Pub Date : 2024-11-28 DOI: 10.1163/22134808-bja10136
Thomas J Hostler, Giulia L Poerio, Clau Nader, Safiyya Mank, Andrew C Lin, Mario Villena-González, Nate Plutzik, Nitin K Ahuja, Daniel H Baker, Scott Bannister, Emma L Barratt, Stacey A Bedwell, Pierre-Edouard Billot, Emma Blakey, Flavia Cardini, Daniella K Cash, Nick J Davis, Bleiz M Del Sette, Mercede Erfanian, Josephine R Flockton, Beverley Fredborg, Helge Gillmeister, Emma Gray, Sarah M Haigh, Laura L Heisick, Agnieszka Janik McErlean, Helle Breth Klausen, Hirohito M Kondo, Franzisca Maas, L Taylor Maurand, Lawrie S McKay, Marco Mozzoni, Gabriele Navyte, Jessica A Ortega-Balderas, Emma C Palmer-Cooper, Craig A H Richard, Natalie Roberts, Vincenzo Romei, Felix Schoeller, Steven D Shaw, Julia Simner, Stephen D Smith, Eva Specker, Angelica Succi, Niilo V Valtakari, Jennie Weinheimer, Jasper Zehetgrube
Autonomous Sensory Meridian Response (ASMR) is a multisensory experience most often associated with feelings of relaxation and altered consciousness, elicited by stimuli which include whispering, repetitive movements, and close personal attention. Since 2015, ASMR research has grown rapidly, spanning disciplines from neuroscience to media studies but lacking a collaborative or interdisciplinary approach. To build a cohesive and connected structure for ASMR research moving forwards, a modified Delphi study was conducted with ASMR experts, practitioners, community members, and researchers from various disciplines. Ninety-eight participants provided 451 suggestions for ASMR research priorities which were condensed into 13 key areas: (1) Definition, conceptual clarification, and measurement of ASMR; (2) Origins and development of ASMR; (3) Neurophysiology of ASMR; (4) Understanding ASMR triggers; (5) Factors affecting the likelihood of experiencing/eliciting ASMR; (6) ASMR and individual/cultural differences; (7) ASMR and the senses; (8) ASMR and social intimacy; (9) Positive and negative consequences of ASMR in the general population; (10) Therapeutic applications of ASMR in clinical contexts; (11) Effects of long-term ASMR use; (12) ASMR platforms and technology; (13) ASMR community, culture, and practice. These were voted on by 70% of the initial participant pool using best/worst scaling methods. The resulting agenda provides a clear map for ASMR research to enable new and existing researchers to orient themselves towards important questions for the field and to inspire interdisciplinary collaborations.
{"title":"Research Priorities for Autonomous Sensory Meridian Response: An Interdisciplinary Delphi Study.","authors":"Thomas J Hostler, Giulia L Poerio, Clau Nader, Safiyya Mank, Andrew C Lin, Mario Villena-González, Nate Plutzik, Nitin K Ahuja, Daniel H Baker, Scott Bannister, Emma L Barratt, Stacey A Bedwell, Pierre-Edouard Billot, Emma Blakey, Flavia Cardini, Daniella K Cash, Nick J Davis, Bleiz M Del Sette, Mercede Erfanian, Josephine R Flockton, Beverley Fredborg, Helge Gillmeister, Emma Gray, Sarah M Haigh, Laura L Heisick, Agnieszka Janik McErlean, Helle Breth Klausen, Hirohito M Kondo, Franzisca Maas, L Taylor Maurand, Lawrie S McKay, Marco Mozzoni, Gabriele Navyte, Jessica A Ortega-Balderas, Emma C Palmer-Cooper, Craig A H Richard, Natalie Roberts, Vincenzo Romei, Felix Schoeller, Steven D Shaw, Julia Simner, Stephen D Smith, Eva Specker, Angelica Succi, Niilo V Valtakari, Jennie Weinheimer, Jasper Zehetgrube","doi":"10.1163/22134808-bja10136","DOIUrl":"https://doi.org/10.1163/22134808-bja10136","url":null,"abstract":"<p><p>Autonomous Sensory Meridian Response (ASMR) is a multisensory experience most often associated with feelings of relaxation and altered consciousness, elicited by stimuli which include whispering, repetitive movements, and close personal attention. Since 2015, ASMR research has grown rapidly, spanning disciplines from neuroscience to media studies but lacking a collaborative or interdisciplinary approach. To build a cohesive and connected structure for ASMR research moving forwards, a modified Delphi study was conducted with ASMR experts, practitioners, community members, and researchers from various disciplines. Ninety-eight participants provided 451 suggestions for ASMR research priorities which were condensed into 13 key areas: (1) Definition, conceptual clarification, and measurement of ASMR; (2) Origins and development of ASMR; (3) Neurophysiology of ASMR; (4) Understanding ASMR triggers; (5) Factors affecting the likelihood of experiencing/eliciting ASMR; (6) ASMR and individual/cultural differences; (7) ASMR and the senses; (8) ASMR and social intimacy; (9) Positive and negative consequences of ASMR in the general population; (10) Therapeutic applications of ASMR in clinical contexts; (11) Effects of long-term ASMR use; (12) ASMR platforms and technology; (13) ASMR community, culture, and practice. These were voted on by 70% of the initial participant pool using best/worst scaling methods. The resulting agenda provides a clear map for ASMR research to enable new and existing researchers to orient themselves towards important questions for the field and to inspire interdisciplinary collaborations.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 6-8","pages":"499-528"},"PeriodicalIF":1.8,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-19 DOI: 10.1163/22134808-bja10138
Yusuke Suzuki, Masayoshi Nagai
Participants tend to produce a higher or lower vocal pitch in response to upward or downward visual motion, suggesting a pitch-motion correspondence between the visual and speech production processes. However, previous studies were contaminated by factors such as the meaning of vocalized words and the intrinsic pitch or tongue movements associated with the vowels. To address these issues, we examined the pitch-motion correspondence between simple visual motion and pitched speech production. Participants were required to produce a high- or low-pitched meaningless single vowel [a] in response to the upward or downward direction of a visual motion stimulus. Using a single vowel, we eliminated the artifacts related to the meaning, intrinsic pitch, and tongue movements of multiple vocalized vowels. The results revealed that vocal responses were faster when the pitch corresponded to the visual motion (consistent condition) than when it did not (inconsistent condition). This result indicates that the pitch-motion correspondence in speech production does not depend on the stimulus meaning, intrinsic pitch, or tongue movement of the vocalized words. In other words, the present study suggests that the pitch-motion correspondence can be explained more parsimoniously as an association between simple sensory (visual motion) and motoric (vocal pitch) features. Additionally, acoustic analysis revealed that speech production aligned with visual motion exhibited lower stress, greater confidence, and higher vocal fluency.
{"title":"Visual Upward/Downward Motion Elicits Fast and Fluent High-/Low-Pitched Speech Production.","authors":"Yusuke Suzuki, Masayoshi Nagai","doi":"10.1163/22134808-bja10138","DOIUrl":"https://doi.org/10.1163/22134808-bja10138","url":null,"abstract":"<p><p>Participants tend to produce a higher or lower vocal pitch in response to upward or downward visual motion, suggesting a pitch-motion correspondence between the visual and speech production processes. However, previous studies were contaminated by factors such as the meaning of vocalized words and the intrinsic pitch or tongue movements associated with the vowels. To address these issues, we examined the pitch-motion correspondence between simple visual motion and pitched speech production. Participants were required to produce a high- or low-pitched meaningless single vowel [a] in response to the upward or downward direction of a visual motion stimulus. Using a single vowel, we eliminated the artifacts related to the meaning, intrinsic pitch, and tongue movements of multiple vocalized vowels. The results revealed that vocal responses were faster when the pitch corresponded to the visual motion (consistent condition) than when it did not (inconsistent condition). This result indicates that the pitch-motion correspondence in speech production does not depend on the stimulus meaning, intrinsic pitch, or tongue movement of the vocalized words. In other words, the present study suggests that the pitch-motion correspondence can be explained more parsimoniously as an association between simple sensory (visual motion) and motoric (vocal pitch) features. Additionally, acoustic analysis revealed that speech production aligned with visual motion exhibited lower stress, greater confidence, and higher vocal fluency.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 6-8","pages":"529-555"},"PeriodicalIF":1.8,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-19 DOI: 10.1163/22134808-bja10134
Luke E Miller, Alessandro Farnè
Tools can extend the sense of touch beyond the body, allowing the user to extract sensory information about distal objects in their environment. Though research on this topic has trickled in over the last few decades, little is known about the neurocomputational mechanisms of extended touch. In 2016, along with our late collaborator Vincent Hayward, we began a series of studies that attempted to fill this gap. We specifically focused on the ability to localize touch on the surface of a rod, as if it were part of the body. We have conducted eight behavioral experiments over the last several years, all of which have found that humans are incredibly accurate at tool-extended tactile localization. In the present article, we perform a model-driven re-analysis of these findings with an eye toward estimating the underlying parameters that map sensory input into spatial perception. This re-analysis revealed that users can almost perfectly localize touch on handheld tools. This raises the question of how humans can be so good at localizing touch on an inert noncorporeal object. The remainder of the paper focuses on three aspects of this process that occupied much of our collaboration with Vincent: the mechanical information used by participants for localization; the speed by which the nervous system can transform this information into a spatial percept; and whether body-based computations are repurposed for tool-extended touch. In all, these studies underscore the special relationship between bodies and tools.
{"title":"Extending Tactile Space With Handheld Tools: A Re-Analysis and Review.","authors":"Luke E Miller, Alessandro Farnè","doi":"10.1163/22134808-bja10134","DOIUrl":"https://doi.org/10.1163/22134808-bja10134","url":null,"abstract":"<p><p>Tools can extend the sense of touch beyond the body, allowing the user to extract sensory information about distal objects in their environment. Though research on this topic has trickled in over the last few decades, little is known about the neurocomputational mechanisms of extended touch. In 2016, along with our late collaborator Vincent Hayward, we began a series of studies that attempted to fill this gap. We specifically focused on the ability to localize touch on the surface of a rod, as if it were part of the body. We have conducted eight behavioral experiments over the last several years, all of which have found that humans are incredibly accurate at tool-extended tactile localization. In the present article, we perform a model-driven re-analysis of these findings with an eye toward estimating the underlying parameters that map sensory input into spatial perception. This re-analysis revealed that users can almost perfectly localize touch on handheld tools. This raises the question of how humans can be so good at localizing touch on an inert noncorporeal object. The remainder of the paper focuses on three aspects of this process that occupied much of our collaboration with Vincent: the mechanical information used by participants for localization; the speed by which the nervous system can transform this information into a spatial percept; and whether body-based computations are repurposed for tool-extended touch. In all, these studies underscore the special relationship between bodies and tools.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-19"},"PeriodicalIF":1.8,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-12 DOI: 10.1163/22134808-bja10137
Ghazaleh Mahzouni, Moorea M Welch, Michael Young, Veda Reddy, Patrawat Samermit, Nicolas Davidenko
Misophonia is characterized by strong negative reactions to everyday sounds, such as chewing, slurping, or breathing, that can have negative consequences for daily life. Here, we investigated the role of visual stimuli in modulating misophonic reactions. We recruited 26 individuals with misophonia and 31 healthy controls and presented them with 26 sound-swapped videos: 13 trigger sounds, each paired both with its Original Video Source (OVS) and with a Positive Attributable Visual Source (PAVS). Our results show that PAVS stimuli significantly increase the pleasantness and reduce the intensity of bodily sensations associated with trigger sounds in both the misophonia and control groups. Importantly, people with misophonia experienced a larger reduction in bodily sensations than control participants. An analysis of self-reported bodily sensation descriptions revealed that PAVS-paired sounds led participants to use significantly fewer words pertaining to body parts compared to the OVS-paired sounds. We also found that participants who scored higher on the Duke Misophonia Questionnaire (DMQ) symptom severity scale had higher auditory imagery scores, yet visual imagery was not associated with the DMQ. Overall, our results show that the negative impact of misophonic trigger sounds can be attenuated by presenting them alongside PAVSs.
{"title":"Positive Attributable Visual Sources Attenuate the Impact of Trigger Sounds in Misophonia.","authors":"Ghazaleh Mahzouni, Moorea M Welch, Michael Young, Veda Reddy, Patrawat Samermit, Nicolas Davidenko","doi":"10.1163/22134808-bja10137","DOIUrl":"https://doi.org/10.1163/22134808-bja10137","url":null,"abstract":"<p><p>Misophonia is characterized by strong negative reactions to everyday sounds, such as chewing, slurping or breathing, that can have negative consequences for daily life. Here, we investigated the role of visual stimuli in modulating misophonic reactions. We recruited 26 misophonics and 31 healthy controls and presented them with 26 sound-swapped videos: 13 trigger sounds paired with the 13 Original Video Sources (OVS) and with 13 Positive Attributable Visual Sources (PAVS). Our results show that PAVS stimuli significantly increase the pleasantness and reduce the intensity of bodily sensations associated with trigger sounds in both the misophonia and control groups. Importantly, people with misophonia experienced a larger reduction of bodily sensations compared to the control participants. An analysis of self-reported bodily sensation descriptions revealed that PAVS-paired sounds led participants to use significantly fewer words pertaining to body parts compared to the OVS-paired sounds. We also found that participants who scored higher on the Duke Misophonia Questionnaire (DMQ) symptom severity scale had higher auditory imagery scores, yet visual imagery was not associated with the DMQ. Overall, our results show that the negative impact of misophonic trigger sounds can be attenuated by presenting them alongside PAVSs.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 6-8","pages":"475-498"},"PeriodicalIF":1.8,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-05 DOI: 10.1163/22134808-bja10135
Ivan Makarov, Runar Unnthorsson, Árni Kristjánsson, Ian M Thornton
In two experiments, we explored whether cross-modal cues can be used to improve foraging for multiple targets in a novel human foraging paradigm. Foraging arrays consisted of a 6 × 6 grid containing outline circles with a small dot on the circumference. Each dot rotated from a random starting location in steps of 30°, either clockwise or counterclockwise, around the circumference. Targets were defined by a synchronized rate of rotation, which varied from trial to trial, and there were two distractor sets, one that rotated faster and one that rotated slower than the target rate. In Experiment 1, we compared baseline performance to a condition in which a nonspatial auditory cue was used to indicate the rate of target rotation. While overall foraging speed remained slow in both conditions, suggesting serial scanning of the display, the auditory cue reduced target detection times by a factor of two. In Experiment 2, we replicated the auditory cue advantage and also showed that a vibrotactile pulse, delivered to the wrist, could be almost as effective. Interestingly, a visual cue to rotation rate, in which the frame of the display changed polarity in step with target rotation, did not lead to the same foraging advantage. Our results clearly demonstrate that cross-modal cues to synchrony can be used to improve multitarget foraging, provided that synchrony itself is a defining feature of target identity.
{"title":"Cross-Modal Cues Improve the Detection of Synchronized Targets during Human Foraging.","authors":"Ivan Makarov, Runar Unnthorsson, Árni Kristjánsson, Ian M Thornton","doi":"10.1163/22134808-bja10135","DOIUrl":"https://doi.org/10.1163/22134808-bja10135","url":null,"abstract":"<p><p>In two experiments, we explored whether cross-modal cues can be used to improve foraging for multiple targets in a novel human foraging paradigm. Foraging arrays consisted of a 6 × 6 grid containing outline circles with a small dot on the circumference. Each dot rotated from a random starting location in steps of 30°, either clockwise or counterclockwise, around the circumference. Targets were defined by a synchronized rate of rotation, which varied from trial-to-trial, and there were two distractor sets, one that rotated faster and one that rotated slower than the target rate. In Experiment 1, we compared baseline performance to a condition in which a nonspatial auditory cue was used to indicate the rate of target rotation. While overall foraging speed remained slow in both conditions, suggesting serial scanning of the display, the auditory cue reduced target detection times by a factor of two. In Experiment 2, we replicated the auditory cue advantage, and also showed that a vibrotactile pulse, delivered to the wrist, could be almost as effective. Interestingly, a visual-cue to rotation rate, in which the frame of the display changed polarity in step with target rotation, did not lead to the same foraging advantage. Our results clearly demonstrate that cross-modal cues to synchrony can be used to improve multitarget foraging, provided that synchrony itself is a defining feature of target identity.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 6-8","pages":"457-474"},"PeriodicalIF":1.8,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-11-01 DOI: 10.1163/22134808-bja10133
Xiaoyu Tang, Wanlong Liu, Yingnan Wu, Rongxia Ren, Jiaying Sun, Jiajia Yang, Aijun Wang, Ming Zhang
Combining information from visual and auditory modalities to form a unified and coherent perception is known as audiovisual integration. Audiovisual integration is affected by many factors. However, it remains unclear whether trial history can influence audiovisual integration. We used a target-target paradigm to investigate how the target modality and spatial location of the previous trial affect audiovisual integration under conditions of divided-modalities attention (Experiment 1) and modality-specific selective attention (Experiment 2). In Experiment 1, we found that audiovisual integration was enhanced at repeated locations compared with switched locations. Audiovisual integration was largest following auditory targets compared to following visual and audiovisual targets. In Experiment 2, where participants were asked to attend only to the visual modality, we found that the audiovisual integration effect was larger on repeated-location trials than on switched-location trials only when an audiovisual target had been presented in the previous trial. The present results provide the first evidence that trial history can affect audiovisual integration. The mechanisms by which trial history modulates audiovisual integration are discussed. Future examinations of audiovisual integration should carefully manipulate experimental conditions in light of trial-history effects.
{"title":"The Power of Trial History: How Previous Trial Shapes Audiovisual Integration.","authors":"Xiaoyu Tang, Wanlong Liu, Yingnan Wu, Rongxia Ren, Jiaying Sun, Jiajia Yang, Aijun Wang, Ming Zhang","doi":"10.1163/22134808-bja10133","DOIUrl":"https://doi.org/10.1163/22134808-bja10133","url":null,"abstract":"<p><p>Combining information from visual and auditory modalities to form a unified and coherent perception is known as audiovisual integration. Audiovisual integration is affected by many factors. However, it remains unclear whether the trial history can influence audiovisual integration. We used a target-target paradigm to investigate how the target modality and spatial location of the previous trial affect audiovisual integration under conditions of divided-modalities attention (Experiment 1) and modality-specific selective attention (Experiment 2). In Experiment 1, we found that audiovisual integration was enhanced in the repeat locations compared with switch locations. Audiovisual integration was the largest following the auditory targets compared to following the visual and audiovisual targets. In Experiment 2, where participants were asked to focus only on visual, we found that the audiovisual integration effect was larger in the repeat location trials than switch location trials only when the audiovisual target was presented in the previous trial. The present results provide the first evidence that trial history can have an effect on audiovisual integration. The mechanisms of trial history modulating audiovisual integration are discussed. Future examining of audiovisual integration should carefully manipulate experimental conditions based on the effects of trial history.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 6-8","pages":"431-456"},"PeriodicalIF":1.8,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-10-08 DOI: 10.1163/22134808-bja10132
Riham Hafez Mohamed, Niloufar Ansari, Bahaa Abdeljawad, Celina Valdivia, Abigail Edwards, Kaitlyn M A Parks, Yassaman Rafat, Ryan A Stevenson
Face-to-face speech communication is an audiovisual process during which the interlocutors use both the auditory speech signals and visual, oral articulations to understand one another. These sensory inputs are merged into a single, unified percept through a process known as multisensory integration. Audiovisual speech integration is known to be influenced by many factors, including listener experience. In this study, we investigated the roles of bilingualism and language experience on integration. We used a McGurk paradigm in which participants were presented with incongruent auditory and visual speech. This included an auditory utterance of 'ba' paired with visual articulations of 'ga', which often induces the perception of 'da' or 'tha' (a fusion effect that is strong evidence of integration), and an auditory utterance of 'ga' paired with visual articulations of 'ba', which often induces the perception of 'bga' (a combination effect that is weaker evidence of integration). We compared fusion and combination effects across three groups (N = 20 each): English monolinguals, Spanish-English bilinguals, and Arabic-English bilinguals, with stimuli presented in all three languages. Monolinguals exhibited significantly stronger multisensory integration than bilinguals in fusion effects, regardless of the stimulus language. Bilinguals exhibited a nonsignificant trend by which greater experience led to increased integration as measured by fusion. These results held regardless of whether McGurk stimuli were presented as stand-alone syllables or in the context of real words.
{"title":"Multisensory Integration of Native and Nonnative Speech in Bilingual and Monolingual Adults.","authors":"Riham Hafez Mohamed, Niloufar Ansari, Bahaa Abdeljawad, Celina Valdivia, Abigail Edwards, Kaitlyn M A Parks, Yassaman Rafat, Ryan A Stevenson","doi":"10.1163/22134808-bja10132","DOIUrl":"10.1163/22134808-bja10132","url":null,"abstract":"<p><p>Face-to-face speech communication is an audiovisual process during which the interlocuters use both the auditory speech signals as well as visual, oral articulations to understand the other. These sensory inputs are merged into a single, unified process known as multisensory integration. Audiovisual speech integration is known to be influenced by many factors, including listener experience. In this study, we investigated the roles of bilingualism and language experience on integration. We used a McGurk paradigm in which participants were presented with incongruent auditory and visual speech. This included an auditory utterance of 'ba' paired with visual articulations of 'ga' that often induce the perception of 'da' or 'tha', a fusion effect that is strong evidence of integration, as well as an auditory utterance of 'ga' paired with visual articulations of 'ba' that often induce the perception of 'bga', a combination effect that is weaker evidence of integration. We compared fusion and combination effects on three groups ( N = 20 each), English monolinguals, Spanish-English bilinguals, and Arabic-English bilinguals, with stimuli presented in all three languages. Monolinguals exhibited significantly stronger multisensory integration than bilinguals in fusion effects, regardless of the stimulus language. Bilinguals exhibited a nonsignificant trend by which greater experience led to increased integration as measured by fusion. These results held regardless of whether McGurk presentations were presented as stand-alone syllables or in the context of real words.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"413-430"},"PeriodicalIF":1.8,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142395076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-29 DOI: 10.1163/22134808-bja10131
Max Teaford, Zachary J Mularczyk, Alannah Gernon, Daniel M Merfeld
Our ability to maintain our balance plays a pivotal role in day-to-day activities. This ability is believed to be the result of interactions between several sensory modalities, including vision and proprioception. Past research has revealed that different aspects of vision, including relative visual motion (i.e., sensed motion of the visual field due to head motion), which can be manipulated by changing the viewing distance between the individual and the predominant visual cues, have an impact on balance. However, only a small number of studies have examined this in the context of virtual reality, and none examined the impact of proprioceptive manipulations for viewing distances greater than 3.5 m. To address this, we conducted an experiment in which 25 healthy adults viewed a dartboard in a virtual gymnasium while standing in a narrow stance on firm and compliant surfaces. The dartboard distance varied across three conditions (1.5 m, 6 m, and 24 m), and an additional blacked-out condition was included. Our results indicate that decreases in relative visual motion, due to an increased viewing distance, yield decreased postural stability, but only with simultaneous proprioceptive disruptions.
{"title":"The Impact of Viewing Distance and Proprioceptive Manipulations on a Virtual Reality Based Balance Test.","authors":"Max Teaford, Zachary J Mularczyk, Alannah Gernon, Daniel M Merfeld","doi":"10.1163/22134808-bja10131","DOIUrl":"10.1163/22134808-bja10131","url":null,"abstract":"<p><p>Our ability to maintain our balance plays a pivotal role in day-to-day activities. This ability is believed to be the result of interactions between several sensory modalities including vision and proprioception. Past research has revealed that different aspects of vision including relative visual motion (i.e., sensed motion of the visual field due to head motion), which can be manipulated by changing the viewing distance between the individual and the predominant visual cues, have an impact on balance. However, only a small number of studies have examined this in the context of virtual reality, and none examined the impact of proprioceptive manipulations for viewing distances greater than 3.5 m. To address this, we conducted an experiment in which 25 healthy adults viewed a dartboard in a virtual gymnasium while standing in narrow stance on firm and compliant surfaces. The dartboard distance varied with three different conditions of 1.5 m, 6 m, and 24 m, including a blacked-out condition. Our results indicate that decreases in relative visual motion, due to an increased viewing distance, yield decreased postural stability - but only with simultaneous proprioceptive disruptions.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"395-412"},"PeriodicalIF":1.8,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142114573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-27 DOI: 10.1163/22134808-bja10130
Charles Spence
The study of chemosensory mental imagery is undoubtedly made more difficult because of the profound individual differences that have been reported in the vividness of (e.g.) olfactory mental imagery. At the same time, the majority of those researchers who have attempted to study people's mental imagery abilities for taste (gustation) have actually mostly been studying flavour mental imagery. Nevertheless, there exists a body of human psychophysical research showing that chemosensory mental imagery exhibits a number of similarities with chemosensory perception. Furthermore, the two systems have frequently been shown to interact with one another. The similarities and differences between chemosensory perception and chemosensory mental imagery at the introspective, behavioural, psychophysical, and cognitive-neuroscience levels in humans are considered in this narrative historical review. The latest neuroimaging evidence shows that many of the same brain areas are engaged by chemosensory mental imagery as have previously been documented to be involved in chemosensory perception. That said, the pattern of neural connectivity is reversed between the 'top-down' control of chemosensory mental imagery and the 'bottom-up' control seen in the case of chemosensory perception. At the same time, however, there remain a number of intriguing questions, such as whether it is even possible to distinguish between orthonasal and retronasal olfactory mental imagery, and the extent to which mental imagery for flavour, which most people not only describe as, but also perceive to be, the 'taste' of food and drink, is capable of reactivating the entire flavour network in the human brain.
{"title":"What is the Relation between Chemosensory Perception and Chemosensory Mental Imagery?","authors":"Charles Spence","doi":"10.1163/22134808-bja10130","DOIUrl":"https://doi.org/10.1163/22134808-bja10130","url":null,"abstract":"<p><p>The study of chemosensory mental imagery is undoubtedly made more difficult because of the profound individual differences that have been reported in the vividness of (e.g.) olfactory mental imagery. At the same time, the majority of those researchers who have attempted to study people's mental imagery abilities for taste (gustation) have actually mostly been studying flavour mental imagery. Nevertheless, there exists a body of human psychophysical research showing that chemosensory mental imagery exhibits a number of similarities with chemosensory perception. Furthermore, the two systems have frequently been shown to interact with one another, the similarities and differences between chemosensory perception and chemosensory mental imagery at the introspective, behavioural, psychophysical, and cognitive neuroscience levels in humans are considered in this narrative historical review. The latest neuroimaging evidence show that many of the same brain areas are engaged by chemosensory mental imagery as have previously been documented to be involved in chemosensory perception. That said, the pattern of neural connectively is reversed between the 'top-down' control of chemosensory mental imagery and the 'bottom-up' control seen in the case of chemosensory perception. At the same time, however, there remain a number of intriguing questions as to whether it is even possible to distinguish between orthonasal and retronasal olfactory mental imagery, and the extent to which mental imagery for flavour, which most people not only describe as, but also perceive to be, the 'taste' of food and drink, is capable of reactivating the entire flavour network in the human brain.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-30"},"PeriodicalIF":1.8,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142082447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-16 DOI: 10.1163/22134808-bja10129
EunSeon Ahn, Areti Majumdar, Taraz G Lee, David Brang
Congruent visual speech improves speech perception accuracy, particularly in noisy environments. Conversely, mismatched visual speech can alter what is heard, leading to an illusory percept that differs from the auditory and visual components, known as the McGurk effect. While prior transcranial magnetic stimulation (TMS) and neuroimaging studies have identified the left posterior superior temporal sulcus (pSTS) as a causal region involved in the generation of the McGurk effect, it remains unclear whether this region is critical only for this illusion or also for the more general benefits of congruent visual speech (e.g., increased accuracy and faster reaction times). Indeed, recent correlative research suggests that the benefits of congruent visual speech and the McGurk effect rely on largely independent mechanisms. To better understand how these different features of audiovisual integration are causally generated by the left pSTS, we used single-pulse TMS to temporarily disrupt processing within this region while subjects were presented with either congruent or incongruent (McGurk) audiovisual combinations. Consistent with past research, we observed that TMS to the left pSTS reduced the strength of the McGurk effect. Importantly, however, left pSTS stimulation had no effect on the positive benefits of congruent audiovisual speech (increased accuracy and faster reaction times), demonstrating a causal dissociation between the two processes. Our results are consistent with models proposing that the pSTS is but one of multiple critical areas supporting audiovisual speech interactions. Moreover, these data add to a growing body of evidence suggesting that the McGurk effect is an imperfect surrogate measure for more general and ecologically valid audiovisual speech behaviors.
{"title":"Evidence for a Causal Dissociation of the McGurk Effect and Congruent Audiovisual Speech Perception via TMS to the Left pSTS.","authors":"EunSeon Ahn, Areti Majumdar, Taraz G Lee, David Brang","doi":"10.1163/22134808-bja10129","DOIUrl":"10.1163/22134808-bja10129","url":null,"abstract":"<p><p>Congruent visual speech improves speech perception accuracy, particularly in noisy environments. Conversely, mismatched visual speech can alter what is heard, leading to an illusory percept that differs from the auditory and visual components, known as the McGurk effect. While prior transcranial magnetic stimulation (TMS) and neuroimaging studies have identified the left posterior superior temporal sulcus (pSTS) as a causal region involved in the generation of the McGurk effect, it remains unclear whether this region is critical only for this illusion or also for the more general benefits of congruent visual speech (e.g., increased accuracy and faster reaction times). Indeed, recent correlative research suggests that the benefits of congruent visual speech and the McGurk effect rely on largely independent mechanisms. To better understand how these different features of audiovisual integration are causally generated by the left pSTS, we used single-pulse TMS to temporarily disrupt processing within this region while subjects were presented with either congruent or incongruent (McGurk) audiovisual combinations. Consistent with past research, we observed that TMS to the left pSTS reduced the strength of the McGurk effect. Importantly, however, left pSTS stimulation had no effect on the positive benefits of congruent audiovisual speech (increased accuracy and faster reaction times), demonstrating a causal dissociation between the two processes. Our results are consistent with models proposing that the pSTS is but one of multiple critical areas supporting audiovisual speech interactions. Moreover, these data add to a growing body of evidence suggesting that the McGurk effect is an imperfect surrogate measure for more general and ecologically valid audiovisual speech behaviors.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 4-5","pages":"341-363"},"PeriodicalIF":1.8,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11388023/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142082470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}