Pub Date : 2025-10-13 | DOI: 10.1163/22134808-bja10162
Older Adults with Clinically Normal Sensory and Cognitive Abilities Perceive Audiovisual Simultaneity and Temporal Order Differently than Younger Adults.
Stephanie Yung, M Kathleen Pichora-Fuller, Dirk B Walther, Raheleh Saryazdi, Jennifer L Campos
It is well established that individual sensory and cognitive abilities often decline with older age; however, previous studies examining whether multisensory processes and multisensory integration also change with older age have been inconsistent. One possible reason for these inconsistencies is that studies differ in how the sensory and cognitive abilities of older adult participant groups have been characterized and controlled for. The current study examined whether multisensory (audiovisual) synchrony perception differs between younger and older adults using audiovisual simultaneity judgement (SJ) and temporal order judgement (TOJ) tasks, and explored whether performance on these audiovisual tasks was associated with unisensory (hearing, vision) and cognitive (global cognition and executive functioning) abilities within clinically normal limits. Healthy younger and older adults completed audiovisual SJ and TOJ tasks. Auditory-only and visual-only SJ tasks were also completed to independently assess temporal processing in hearing and vision. Older adults completed standardized assessments of hearing, vision, and cognition. Results showed that, compared to younger adults, older adults had wider temporal binding windows in the audiovisual SJ and TOJ tasks and larger points of subjective simultaneity in the TOJ task. No significant associations were found among the unisensory (standard baseline and unisensory SJ), cognitive, or audiovisual (SJ, TOJ) measures. These findings suggest that audiovisual integrative processes change with older age, even within clinically normal sensory and cognitive abilities.
Multisensory Research, 485-515.
Pub Date : 2025-10-13 | DOI: 10.1163/22134808-bja10166
Temporal Coherence in Crossmodal Perceptual Binding: Implications for the Design of a Real-Time Multisensory Speech Recognition Algorithm.
Yonghee Oh, Emily Keller, Audie Gilchrist, Kayla Borges, Kelli Meyers
The inputs delivered to different sensory organs provide complementary information about the environment. Many previous studies have demonstrated that presenting information from an additional modality (e.g., visual cues) can improve auditory perception, especially in noisy environments. Understanding temporal asynchrony between different sensory modalities is fundamentally important for processing and delivering multisensory information in real time with minimal delay. The purpose of this study was to quantify the average limit of temporal asynchrony within which multisensory stimuli are likely to be perceptually integrated. Twenty adults completed simultaneity judgment measurements using 100-ms stimuli in three sensory modalities (auditory, visual, and tactile), and the test-retest reliability of their simultaneity judgments was verified across three separate tests conducted one week apart. Two crossmodal temporal coherence cues were examined: the temporal binding window (TBW), denoting the time frame within which two sensory modalities are perceptually integrated, and the point of subjective simultaneity (PSS), denoting a perceptual lead of one modality over the other. On average, the TBWs were 389 ms (auditory-visual, AV), 324 ms (auditory-tactile, AT), and 299 ms (visual-tactile, VT), and the PSSs were shifted 105 ms toward the visual cue, 16 ms toward the tactile cue, and 77 ms toward the visual cue for the AV, AT, and VT conditions, respectively. Across all three crossmodal pairings, test-retest variability averaged less than 50 ms for the TBW and 30 ms for the PSS. These findings may specify a minimum time delay for real-time multisensory processing, suggesting temporal parameters for future development of multisensory hearing assistive devices.
Multisensory Research, 38(4-5), 273-288.
Pub Date : 2025-10-10 | DOI: 10.1163/22134808-bja10169
Haptic Microscopy: Tactile Perception of Small Scales.
Sinan Haliyo
Over three PhD theses co-supervised with Vincent Hayward, we developed a technique to scale up microscale force interactions to a user's hand with near-perfect linear amplification. While this challenge could be approached through robotic teleoperation - using a precise robot manipulator with force sensing controlled via a haptic device - the required bilateral coupling between different physical scales demands extremely large homothetic gains (typically ×10 000 to ×100 000) in both displacement and force. These large gains compromise transparency, as device imperfections and stability requirements mask the faithful perception of microscale phenomena. To overcome this limitation, we developed the concept of haptic microscopy. We designed a complete microscale teleoperation system from the ground up, featuring a custom robotic manipulator and novel haptic device, implementing direct bilateral coupling with pure gains. This electromechanical system successfully amplifies microscale forces several thousand times, enabling operators to better understand the physical landscape they are manipulating. Our paper details the design process for both the microtool and haptic device, and presents experiments demonstrating users' ability to tactilely explore microscale interactions.
Multisensory Research, 1-25.
Pub Date : 2025-09-24 | DOI: 10.1163/22134808-bja10156
Introduction to the Special Issue on The Merging of the Senses.
Benjamin A Rowland
The 1993 book The Merging of the Senses has proven to be a profoundly impactful text that has shaped research programs studying the interaction between the senses for the last three decades. The book combines skillful and approachable narration with engaging illustrations and was received with rave reviews on publication as one of the first comprehensive treatments of the subject. It captures the impressive breadth of domains in which multisensory integration impacts the daily life of all animals and promotes a systematic approach to understanding its underlying operation by interrogating the nervous system at multiple levels, from the peripheral organ, through convergence, integration, and decision-making, to effected behavior. Thirty years later, the generations of scientists inspired by the text have built an impressive structure on this foundation: refining theory and experimental technique, investigating new domains and species, advancing our understanding of the origins, maturation, and plasticity of the process, translating biological principles to artificial systems, and discovering new applications of multisensory research in clinical and rehabilitative domains.
Multisensory Research, 38(4-5), 143-152.
Pub Date : 2025-09-24 | DOI: 10.1163/22134808-bja10159
Tingle-Eliciting Audiovisual Properties of Autonomous Sensory Meridian Response (ASMR) Videos.
Madeleine R Jones, Aurelia Daniels, Kajsa Igelström, Juulia Suvilehto, India Morrison
In autonomous sensory meridian response (ASMR), certain audiovisual stimuli can evoke a range of spontaneous sensations, in particular a pleasant tingling that often originates across the scalp and spreads down the spine toward the shoulders ('tingles'). Major drivers of tingle elicitation in ASMR stimuli are often 'crisp' sounds created by whispering or manipulating an object, as well as social-attentional features such as implied direct attention to the viewer. However, the relationships between specific stimulus properties and ASMR-typical subjective responses remain to be fully mapped. In two studies, we therefore sought to isolate specific tingle-eliciting stimulus features by comparing tingle reports for ASMR video clips between ASMR experiencers and control participants. The first study compared intact versus desynchronized video clips to probe whether the presence of audiovisual features would be sufficient to elicit tingles, or whether these features needed to be presented in a coherent sequence. The second study compared clips with filtered and unfiltered audio, demonstrating that 'crisp' sounds had greater tingle efficacy than 'blunted' sounds. Overall, the presence of stimulus features in both synchronized and desynchronized clips was effective in eliciting self-reported subjective responses (tingle frequency), while intact clips involving object manipulation and speech sounds were most effective. An exploratory analysis suggested that viewer-oriented implied attention also influenced tingle ratings. These findings further pinpoint the importance of object and speech sounds in eliciting ASMR tingle responses, supporting the proposition that audiovisual stimulus features implying proximity to the viewer play a key role.
Multisensory Research, 427-452.
Pub Date : 2025-09-24 | DOI: 10.1163/22134808-bja10157
Pseudo-Dribbling Experience Using Single Overlapped Vibrotactile Stimulation Simultaneously to the Hand and the Feet.
Takumi Kuhara, Kakagu Komazaki, Junji Watanabe, Yoshihiro Tanaka
When designing a haptic interface, simplicity is crucial to avoid the negative effects of excessive weight and complexity. Using multimodal information, exploiting haptic illusions, and providing context are known ways to create simpler interfaces. We have previously proposed single overlapped vibrotactile stimulation (SOVS) for presenting spatiotemporal tactile perception, a method that simultaneously presents overlapped waveforms to multiple body parts. In that work, the accelerations measured while a person dribbled a basketball, recorded by accelerometers positioned on the index finger and on the floor, were overlapped and presented as stimuli. When these stimuli were presented simultaneously to the hand and the feet, they produced a dribbling sensation, as if an imaginary ball were moving back and forth between the hand and the feet. This demonstrated the potential to eliminate the need for time synchronization and reduce the number of required channels, ultimately leading to simple haptic interfaces that enhance an immersive experience. In this paper, we investigate the key factor behind the perception of SOVS using simple vibrotactile stimuli. The first experiment measured the occurrence rate of the dribbling feeling for different combinations of prepared stimuli; the results show that combining two different input amplitudes is crucial for the occurrence of the phenomenon. The second experiment assessed how realistic each stimulus, presented to the hand and the feet separately, felt to the participants. The results show that for the hand, perceived reality corresponded to the strength of the input amplitude, whereas for the feet the second-strongest input amplitude was perceived as most realistic. This suggests that when a combination consists of duplicate input amplitudes and/or amplitudes with low perceived reality, the occurrence rate tends to decrease.
Multisensory Research, 1-20.
Pub Date : 2025-09-24 | DOI: 10.1163/22134808-bja10160
Autistic Traits and Temporal Integration of Auditory and Visual Stimuli in the General Population: The Role of Imagination.
Yurika Tsuji, Yuki Nishiguchi, Akari Noda, Shu Imaizumi
Autistic individuals experience temporal integration difficulties in some sensory modalities that may be related to imagination difficulties. In this study, we tested the hypotheses that, among Japanese university students in the general population, (1) higher autistic traits and (2) greater imagination difficulties are associated with lower performance in tasks requiring temporal integration. Two tasks were used to assess participants' temporal integration abilities: a speech-in-noise test using noise with temporal dips in the auditory modality and a slit-viewing task in the visual modality. The results showed that low performance in the speech-in-noise test was related to autistic traits and to some aspects of imagination difficulties, whereas performance in the slit-viewing task was related to neither autistic traits nor imagination difficulties. The ability to temporally integrate fragments of auditory information is expected to be associated with performance in perceiving speech in noise with temporal dips. Difficulties in perceiving sensory information as a single unified percept using priors may cause difficulties in temporally integrating auditory information and perceiving speech in noise. Furthermore, structural equation modeling suggested that imagination difficulties are linked to difficulties in perceiving speech in noise with temporal dips, which in turn are linked to social impairments.
Multisensory Research, 453-483.
Pub Date : 2025-09-24 | DOI: 10.1163/22134808-bja10158
Pitch-Color Associations are Context-Dependent and Driven by Lightness.
Aurore Zelazny, Thomas Alrik Sørensen
Pitch-color associations have been widely explored in the context of cross-modal correspondences. Previous research indicates that pitch height maps onto lightness, and that high pitches are often associated with yellow and low pitches with blue. However, whether these associations are absolute or relative remains unclear. This study investigated the effect of context on pitch-color associations by presenting seven pitch stimuli (C4-B4) in randomized, ascending, and descending orders. A large sample (N = 6626) was asked to select colors for each pitch using a color wheel. Results revealed that pitch height was linearly mapped onto lightness, with higher pitches associated with lighter colors. Notably, this mapping was influenced by context: ascending sequences produced lighter colors and descending sequences produced darker colors compared to randomized presentations. Furthermore, lightness associations developed progressively, going from binary to linear as trials progressed. Saturation, on the other hand, did not follow a linear pattern but peaked at mid-range pitches and was not influenced by context. Additionally, compared to randomized presentation, color associations showed a downward shift (i.e., reported for lower pitches) in the ascending presentation, and an upward shift (i.e., reported for higher pitches) in the descending presentation. These findings suggest that pitch-color associations are relative rather than absolute, possibly due to the low ability to categorize pitches in the general population, with lightness appearing to be the primary factor in color choices. This study contributes to the understanding of associations across sensory modalities, which may be a promising avenue for investigating hidden cognitive processes such as sensory illusions.
Multisensory Research, 403-426.
Pub Date : 2025-08-01 | DOI: 10.1163/22134808-bja10153
CART: The Comprehensive Analysis of Reaction Times - GUI for Multisensory Processes and Race Models.
David A Tovar, Marcus R Watson, David J Lewkowicz, Monica Gori, Micah M Murray, Mark T Wallace
Multisensory integration (MSI) is a core neurobehavioral operation that enhances our ability to perceive, decide, and act by combining information from different sensory modalities. This integrative capability is essential for efficiently navigating complex environments and responding to their multisensory nature. One of the powerful behavioral benefits of MSI is the speeding of responses. To evaluate this speeding, traditional MSI research often relies on so-called race models, which predict reaction times (RTs) based on the assumption that information from the different sensory modalities is initially processed independently. When observed RTs are faster than those predicted by these models, it indicates true convergence and integration of multisensory information prior to the initiation of the motor response. Despite the strong applicability of race models in MSI research, analysis of multisensory RT data often poses challenges for researchers, particularly in managing, interpreting, and modeling large datasets or collections of datasets. To surmount these challenges, we developed a user-friendly graphical user interface (GUI) packaged into a freely available software application that is compatible with both Windows and Mac and requires no programming expertise. This tool simplifies the processes of data loading, filtering, and statistical analysis. It allows the calculation and visualization of RTs across different sensory modalities, the performance of robust statistical tests, and the testing of race model violations. By integrating these capabilities into a single platform, the CART GUI facilitates MSI analyses and makes them accessible to a wider range of users, from novice researchers to experts in the field. The GUI's user-friendly design and advanced analytical features will allow for valuable insights into the mechanisms underlying MSI and contribute to the advancement of research in this domain.
Multisensory Research, 38(4-5), 211-230.
Pub Date : 2025-07-15 | DOI: 10.1163/22134808-bja10154
Multisensory Number Channels Derived from Individual Differences.
Irene Petrizzo, Guido Marco Cicchini, David Charles Burr, Giovanni Anobile
Recently, analysis of differences in individual performance has provided evidence supporting the existence of sensorimotor number mechanisms. The individual-differences technique assumes that performance for stimuli processed by the same mechanism should be more correlated across individuals than performance for stimuli processed by different mechanisms. Here we replicated this finding and generalized the results to other sensory modalities. We measured the performance of the same participants on three different numerical tasks: a sensorimotor task (series of actions), a temporal numerosity task (series of flashes), and a spatial numerosity task (array of dots). We then searched for tuning selectivity within each task and between task pairs by analysing patterns of correlation between tested numerosities. Correlations within each task showed tuning selectivity in all three cases, with high positive correlations for nearby target numbers that decreased with numerical distance, providing psychophysical and physiological evidence for the existence of multisensory numerosity channels. Cross-task correlations also suggested shared tuning between sensorimotor and temporal visual numerosity, which points to channels responsible for performance in both visual and motor temporal number tasks. However, no shared tuning emerged between spatial visual numerosity and the other two tasks, suggesting partially different patterns of encoding for temporal and spatial numerosity. Taken together, our results provide evidence for a similar functional architecture across the three tasks tested here, but also imply that there is no full overlap of shared resources between numerosity domains, suggesting at least partially separate mechanisms of encoding.
Multisensory Research, 383-402.