
Latest Publications in Multisensory Research

Research Priorities for Autonomous Sensory Meridian Response: An Interdisciplinary Delphi Study.
IF 1.8 | CAS Zone 4 (Psychology) | Q3 (Biophysics) | Pub Date: 2024-11-28 | DOI: 10.1163/22134808-bja10136
Thomas J Hostler, Giulia L Poerio, Clau Nader, Safiyya Mank, Andrew C Lin, Mario Villena-González, Nate Plutzik, Nitin K Ahuja, Daniel H Baker, Scott Bannister, Emma L Barratt, Stacey A Bedwell, Pierre-Edouard Billot, Emma Blakey, Flavia Cardini, Daniella K Cash, Nick J Davis, Bleiz M Del Sette, Mercede Erfanian, Josephine R Flockton, Beverley Fredborg, Helge Gillmeister, Emma Gray, Sarah M Haigh, Laura L Heisick, Agnieszka Janik McErlean, Helle Breth Klausen, Hirohito M Kondo, Franzisca Maas, L Taylor Maurand, Lawrie S McKay, Marco Mozzoni, Gabriele Navyte, Jessica A Ortega-Balderas, Emma C Palmer-Cooper, Craig A H Richard, Natalie Roberts, Vincenzo Romei, Felix Schoeller, Steven D Shaw, Julia Simner, Stephen D Smith, Eva Specker, Angelica Succi, Niilo V Valtakari, Jennie Weinheimer, Jasper Zehetgrube

Autonomous Sensory Meridian Response (ASMR) is a multisensory experience most often associated with feelings of relaxation and altered consciousness, elicited by stimuli which include whispering, repetitive movements, and close personal attention. Since 2015, ASMR research has grown rapidly, spanning disciplines from neuroscience to media studies but lacking a collaborative or interdisciplinary approach. To build a cohesive and connected structure for ASMR research moving forwards, a modified Delphi study was conducted with ASMR experts, practitioners, community members, and researchers from various disciplines. Ninety-eight participants provided 451 suggestions for ASMR research priorities which were condensed into 13 key areas: (1) Definition, conceptual clarification, and measurement of ASMR; (2) Origins and development of ASMR; (3) Neurophysiology of ASMR; (4) Understanding ASMR triggers; (5) Factors affecting the likelihood of experiencing/eliciting ASMR; (6) ASMR and individual/cultural differences; (7) ASMR and the senses; (8) ASMR and social intimacy; (9) Positive and negative consequences of ASMR in the general population; (10) Therapeutic applications of ASMR in clinical contexts; (11) Effects of long-term ASMR use; (12) ASMR platforms and technology; (13) ASMR community, culture, and practice. These were voted on by 70% of the initial participant pool using best/worst scaling methods. The resulting agenda provides a clear map for ASMR research to enable new and existing researchers to orient themselves towards important questions for the field and to inspire interdisciplinary collaborations.
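The best/worst scaling step reduces to a simple tally. The sketch below is a minimal illustration in Python, assuming a basic count-based best-minus-worst score; the vote data and the exact scoring procedure are invented for illustration, not taken from the study.

```python
from collections import Counter

# Hypothetical best/worst votes: on each trial a participant marks one research
# area as most important ("best") and one as least important ("worst").
best_votes = ["Definition", "Neurophysiology", "Triggers", "Definition", "Therapeutic"]
worst_votes = ["Platforms", "Community", "Platforms", "Long-term use", "Community"]

best = Counter(best_votes)
worst = Counter(worst_votes)
areas = set(best) | set(worst)

# Best-worst score: times chosen best minus times chosen worst
# (normalization by the number of appearances per area is omitted here).
scores = {a: best[a] - worst[a] for a in areas}

for area, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{area:15s} {score:+d}")
```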

{"title":"Research Priorities for Autonomous Sensory Meridian Response: An Interdisciplinary Delphi Study.","authors":"Thomas J Hostler, Giulia L Poerio, Clau Nader, Safiyya Mank, Andrew C Lin, Mario Villena-González, Nate Plutzik, Nitin K Ahuja, Daniel H Baker, Scott Bannister, Emma L Barratt, Stacey A Bedwell, Pierre-Edouard Billot, Emma Blakey, Flavia Cardini, Daniella K Cash, Nick J Davis, Bleiz M Del Sette, Mercede Erfanian, Josephine R Flockton, Beverley Fredborg, Helge Gillmeister, Emma Gray, Sarah M Haigh, Laura L Heisick, Agnieszka Janik McErlean, Helle Breth Klausen, Hirohito M Kondo, Franzisca Maas, L Taylor Maurand, Lawrie S McKay, Marco Mozzoni, Gabriele Navyte, Jessica A Ortega-Balderas, Emma C Palmer-Cooper, Craig A H Richard, Natalie Roberts, Vincenzo Romei, Felix Schoeller, Steven D Shaw, Julia Simner, Stephen D Smith, Eva Specker, Angelica Succi, Niilo V Valtakari, Jennie Weinheimer, Jasper Zehetgrube","doi":"10.1163/22134808-bja10136","DOIUrl":"https://doi.org/10.1163/22134808-bja10136","url":null,"abstract":"<p><p>Autonomous Sensory Meridian Response (ASMR) is a multisensory experience most often associated with feelings of relaxation and altered consciousness, elicited by stimuli which include whispering, repetitive movements, and close personal attention. Since 2015, ASMR research has grown rapidly, spanning disciplines from neuroscience to media studies but lacking a collaborative or interdisciplinary approach. To build a cohesive and connected structure for ASMR research moving forwards, a modified Delphi study was conducted with ASMR experts, practitioners, community members, and researchers from various disciplines. Ninety-eight participants provided 451 suggestions for ASMR research priorities which were condensed into 13 key areas: (1) Definition, conceptual clarification, and measurement of ASMR; (2) Origins and development of ASMR; (3) Neurophysiology of ASMR; (4) Understanding ASMR triggers; (5) Factors affecting the likelihood of experiencing/eliciting ASMR; (6) ASMR and individual/cultural differences; (7) ASMR and the senses; (8) ASMR and social intimacy; (9) Positive and negative consequences of ASMR in the general population; (10) Therapeutic applications of ASMR in clinical contexts; (11) Effects of long-term ASMR use; (12) ASMR platforms and technology; (13) ASMR community, culture, and practice. These were voted on by 70% of the initial participant pool using best/worst scaling methods. The resulting agenda provides a clear map for ASMR research to enable new and existing researchers to orient themselves towards important questions for the field and to inspire interdisciplinary collaborations.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 6-8","pages":"499-528"},"PeriodicalIF":1.8,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Visual Upward/Downward Motion Elicits Fast and Fluent High-/Low-Pitched Speech Production.
IF 1.8 | CAS Zone 4 (Psychology) | Q3 (Biophysics) | Pub Date: 2024-11-19 | DOI: 10.1163/22134808-bja10138
Yusuke Suzuki, Masayoshi Nagai

Participants tend to produce a higher or lower vocal pitch in response to upward or downward visual motion, suggesting a pitch-motion correspondence between the visual and speech production processes. However, previous studies were contaminated by factors such as the meaning of vocalized words and the intrinsic pitch or tongue movements associated with the vowels. To address these issues, we examined the pitch-motion correspondence between simple visual motion and pitched speech production. Participants were required to produce a high- or low-pitched meaningless single vowel [a] in response to the upward or downward direction of a visual motion stimulus. Using a single vowel, we eliminated the artifacts related to the meaning, intrinsic pitch, and tongue movements of multiple vocalized vowels. The results revealed that vocal responses were faster when the pitch corresponded to the visual motion (consistent condition) than when it did not (inconsistent condition). This result indicates that the pitch-motion correspondence in speech production does not depend on the stimulus meaning, intrinsic pitch, or tongue movement of the vocalized words. In other words, the present study suggests that the pitch-motion correspondence can be explained more parsimoniously as an association between simple sensory (visual motion) and motoric (vocal pitch) features. Additionally, acoustic analysis revealed that speech production aligned with visual motion exhibited lower stress, greater confidence, and higher vocal fluency.
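The core measurement is vocal-onset latency in pitch-motion consistent versus inconsistent trials. A minimal analysis sketch follows (Python); the per-participant latencies and the paired t-test choice are illustrative assumptions, not the authors' reported analysis.

```python
import numpy as np
from scipy import stats

# Illustrative per-participant mean vocal onset latencies (ms).
rt_consistent = np.array([512, 498, 530, 505, 521, 489])    # e.g., high pitch to upward motion
rt_inconsistent = np.array([541, 520, 558, 527, 549, 510])  # e.g., high pitch to downward motion

# Paired comparison across participants: faster responses when vocal pitch
# corresponds to the direction of visual motion indicate the correspondence.
t, p = stats.ttest_rel(rt_consistent, rt_inconsistent)
print(f"mean facilitation = {np.mean(rt_inconsistent - rt_consistent):.1f} ms, "
      f"t({len(rt_consistent) - 1}) = {t:.2f}, p = {p:.3f}")
```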

{"title":"Visual Upward/Downward Motion Elicits Fast and Fluent High-/Low-Pitched Speech Production.","authors":"Yusuke Suzuki, Masayoshi Nagai","doi":"10.1163/22134808-bja10138","DOIUrl":"https://doi.org/10.1163/22134808-bja10138","url":null,"abstract":"<p><p>Participants tend to produce a higher or lower vocal pitch in response to upward or downward visual motion, suggesting a pitch-motion correspondence between the visual and speech production processes. However, previous studies were contaminated by factors such as the meaning of vocalized words and the intrinsic pitch or tongue movements associated with the vowels. To address these issues, we examined the pitch-motion correspondence between simple visual motion and pitched speech production. Participants were required to produce a high- or low-pitched meaningless single vowel [a] in response to the upward or downward direction of a visual motion stimulus. Using a single vowel, we eliminated the artifacts related to the meaning, intrinsic pitch, and tongue movements of multiple vocalized vowels. The results revealed that vocal responses were faster when the pitch corresponded to the visual motion (consistent condition) than when it did not (inconsistent condition). This result indicates that the pitch-motion correspondence in speech production does not depend on the stimulus meaning, intrinsic pitch, or tongue movement of the vocalized words. In other words, the present study suggests that the pitch-motion correspondence can be explained more parsimoniously as an association between simple sensory (visual motion) and motoric (vocal pitch) features. Additionally, acoustic analysis revealed that speech production aligned with visual motion exhibited lower stress, greater confidence, and higher vocal fluency.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 6-8","pages":"529-555"},"PeriodicalIF":1.8,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Extending Tactile Space With Handheld Tools: A Re-Analysis and Review.
IF 1.8 | CAS Zone 4 (Psychology) | Q3 (Biophysics) | Pub Date: 2024-11-19 | DOI: 10.1163/22134808-bja10134
Luke E Miller, Alessandro Farnè

Tools can extend the sense of touch beyond the body, allowing the user to extract sensory information about distal objects in their environment. Though research on this topic has trickled in over the last few decades, little is known about the neurocomputational mechanisms of extended touch. In 2016, along with our late collaborator Vincent Hayward, we began a series of studies that attempted to fill this gap. We specifically focused on the ability to localize touch on the surface of a rod, as if it were part of the body. We have conducted eight behavioral experiments over the last several years, all of which have found that humans are incredibly accurate at tool-extended tactile localization. In the present article, we perform a model-driven re-analysis of these findings with an eye toward estimating the underlying parameters that map sensory input into spatial perception. This re-analysis revealed that users can almost perfectly localize touch on handheld tools. This raises the question of how humans can be so good at localizing touch on an inert noncorporeal object. The remainder of the paper focuses on three aspects of this process that occupied much of our collaboration with Vincent: the mechanical information used by participants for localization; the speed by which the nervous system can transform this information into a spatial percept; and whether body-based computations are repurposed for tool-extended touch. In all, these studies underscore the special relationship between bodies and tools.
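One simple way to formalize "almost perfect localization" is a linear mapping from actual to judged contact location, with gain near 1 and offset near 0. The sketch below fits such a mapping by least squares; the rod length and trial data are fabricated for illustration and are not the study's parameter estimates.

```python
import numpy as np

# Fabricated trials: actual contact points along a 60 cm rod and the
# locations participants judged (both in cm from the handle).
actual = np.array([5, 12, 20, 28, 35, 43, 50, 57], dtype=float)
perceived = np.array([6, 12, 21, 27, 36, 42, 51, 56], dtype=float)

# Fit perceived = gain * actual + offset; gain near 1 and offset near 0
# indicate near-veridical localization on the tool surface.
gain, offset = np.polyfit(actual, perceived, 1)
residual_sd = np.std(perceived - (gain * actual + offset))
print(f"gain = {gain:.2f}, offset = {offset:.2f} cm, residual SD = {residual_sd:.2f} cm")
```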

{"title":"Extending Tactile Space With Handheld Tools: A Re-Analysis and Review.","authors":"Luke E Miller, Alessandro Farnè","doi":"10.1163/22134808-bja10134","DOIUrl":"https://doi.org/10.1163/22134808-bja10134","url":null,"abstract":"<p><p>Tools can extend the sense of touch beyond the body, allowing the user to extract sensory information about distal objects in their environment. Though research on this topic has trickled in over the last few decades, little is known about the neurocomputational mechanisms of extended touch. In 2016, along with our late collaborator Vincent Hayward, we began a series of studies that attempted to fill this gap. We specifically focused on the ability to localize touch on the surface of a rod, as if it were part of the body. We have conducted eight behavioral experiments over the last several years, all of which have found that humans are incredibly accurate at tool-extended tactile localization. In the present article, we perform a model-driven re-analysis of these findings with an eye toward estimating the underlying parameters that map sensory input into spatial perception. This re-analysis revealed that users can almost perfectly localize touch on handheld tools. This raises the question of how humans can be so good at localizing touch on an inert noncorporeal object. The remainder of the paper focuses on three aspects of this process that occupied much of our collaboration with Vincent: the mechanical information used by participants for localization; the speed by which the nervous system can transform this information into a spatial percept; and whether body-based computations are repurposed for tool-extended touch. In all, these studies underscore the special relationship between bodies and tools.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-19"},"PeriodicalIF":1.8,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Positive Attributable Visual Sources Attenuate the Impact of Trigger Sounds in Misophonia.
IF 1.8 | CAS Zone 4 (Psychology) | Q3 (Biophysics) | Pub Date: 2024-11-12 | DOI: 10.1163/22134808-bja10137
Ghazaleh Mahzouni, Moorea M Welch, Michael Young, Veda Reddy, Patrawat Samermit, Nicolas Davidenko

Misophonia is characterized by strong negative reactions to everyday sounds, such as chewing, slurping or breathing, that can have negative consequences for daily life. Here, we investigated the role of visual stimuli in modulating misophonic reactions. We recruited 26 misophonics and 31 healthy controls and presented them with 26 sound-swapped videos: 13 trigger sounds paired with the 13 Original Video Sources (OVS) and with 13 Positive Attributable Visual Sources (PAVS). Our results show that PAVS stimuli significantly increase the pleasantness and reduce the intensity of bodily sensations associated with trigger sounds in both the misophonia and control groups. Importantly, people with misophonia experienced a larger reduction of bodily sensations compared to the control participants. An analysis of self-reported bodily sensation descriptions revealed that PAVS-paired sounds led participants to use significantly fewer words pertaining to body parts compared to the OVS-paired sounds. We also found that participants who scored higher on the Duke Misophonia Questionnaire (DMQ) symptom severity scale had higher auditory imagery scores, yet visual imagery was not associated with the DMQ. Overall, our results show that the negative impact of misophonic trigger sounds can be attenuated by presenting them alongside PAVSs.
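The central contrast is a group (misophonia vs. control) by video source (OVS vs. PAVS) comparison of ratings. A hedged aggregation sketch (Python/pandas) follows; the rating values and the 1-9 scale are invented stand-ins for the study's measures.

```python
import pandas as pd

# Invented trial-level ratings on a 1-9 pleasantness scale.
df = pd.DataFrame({
    "group":  ["misophonia"] * 4 + ["control"] * 4,
    "source": ["OVS", "PAVS"] * 4,
    "pleasantness": [2.1, 4.8, 1.8, 4.5, 3.9, 5.6, 4.2, 5.9],
})

# Mean rating per group and video source; the PAVS-minus-OVS difference
# indexes how much a positive visual source attenuates the trigger sound.
means = df.groupby(["group", "source"])["pleasantness"].mean().unstack()
means["PAVS_benefit"] = means["PAVS"] - means["OVS"]
print(means)
```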

{"title":"Positive Attributable Visual Sources Attenuate the Impact of Trigger Sounds in Misophonia.","authors":"Ghazaleh Mahzouni, Moorea M Welch, Michael Young, Veda Reddy, Patrawat Samermit, Nicolas Davidenko","doi":"10.1163/22134808-bja10137","DOIUrl":"https://doi.org/10.1163/22134808-bja10137","url":null,"abstract":"<p><p>Misophonia is characterized by strong negative reactions to everyday sounds, such as chewing, slurping or breathing, that can have negative consequences for daily life. Here, we investigated the role of visual stimuli in modulating misophonic reactions. We recruited 26 misophonics and 31 healthy controls and presented them with 26 sound-swapped videos: 13 trigger sounds paired with the 13 Original Video Sources (OVS) and with 13 Positive Attributable Visual Sources (PAVS). Our results show that PAVS stimuli significantly increase the pleasantness and reduce the intensity of bodily sensations associated with trigger sounds in both the misophonia and control groups. Importantly, people with misophonia experienced a larger reduction of bodily sensations compared to the control participants. An analysis of self-reported bodily sensation descriptions revealed that PAVS-paired sounds led participants to use significantly fewer words pertaining to body parts compared to the OVS-paired sounds. We also found that participants who scored higher on the Duke Misophonia Questionnaire (DMQ) symptom severity scale had higher auditory imagery scores, yet visual imagery was not associated with the DMQ. Overall, our results show that the negative impact of misophonic trigger sounds can be attenuated by presenting them alongside PAVSs.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 6-8","pages":"475-498"},"PeriodicalIF":1.8,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Cross-Modal Cues Improve the Detection of Synchronized Targets during Human Foraging.
IF 1.8 | CAS Zone 4 (Psychology) | Q3 (Biophysics) | Pub Date: 2024-11-05 | DOI: 10.1163/22134808-bja10135
Ivan Makarov, Runar Unnthorsson, Árni Kristjánsson, Ian M Thornton

In two experiments, we explored whether cross-modal cues can be used to improve foraging for multiple targets in a novel human foraging paradigm. Foraging arrays consisted of a 6 × 6 grid containing outline circles with a small dot on the circumference. Each dot rotated from a random starting location in steps of 30°, either clockwise or counterclockwise, around the circumference. Targets were defined by a synchronized rate of rotation, which varied from trial to trial, and there were two distractor sets, one that rotated faster and one that rotated slower than the target rate. In Experiment 1, we compared baseline performance to a condition in which a nonspatial auditory cue was used to indicate the rate of target rotation. While overall foraging speed remained slow in both conditions, suggesting serial scanning of the display, the auditory cue reduced target detection times by a factor of two. In Experiment 2, we replicated the auditory cue advantage, and also showed that a vibrotactile pulse, delivered to the wrist, could be almost as effective. Interestingly, a visual cue to rotation rate, in which the frame of the display changed polarity in step with target rotation, did not lead to the same foraging advantage. Our results clearly demonstrate that cross-modal cues to synchrony can be used to improve multitarget foraging, provided that synchrony itself is a defining feature of target identity.
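The display logic is fully determined by a handful of parameters, so the stimulus side is easy to reconstruct. The sketch below assigns rotation rates to a 6 × 6 grid with one synchronized target set and faster/slower distractor sets, stepping dots in 30° increments; the set sizes, rate range, and speed ratios are illustrative guesses, not the paper's values.

```python
import random

GRID = 36                 # 6 x 6 array of outline circles
STEP_DEG = 30             # each dot advances in 30-degree steps
N_PER_SET = 12            # illustrative split into three sets of 12

target_rate = random.uniform(2.0, 4.0)        # steps per second, varies trial to trial
rates = ([target_rate] * N_PER_SET +          # synchronized targets
         [target_rate * 1.5] * N_PER_SET +    # faster distractors
         [target_rate * 0.5] * N_PER_SET)     # slower distractors
random.shuffle(rates)

# Each dot starts at a random angle and rotates clockwise or counterclockwise.
dots = [{"angle": random.randrange(0, 360, STEP_DEG),
         "direction": random.choice([1, -1]),
         "rate": r} for r in rates]

def advance(dot, dt):
    """Advance a dot by whole 30-degree steps given elapsed time dt (s)."""
    steps = int(dot["rate"] * dt)
    dot["angle"] = (dot["angle"] + dot["direction"] * steps * STEP_DEG) % 360
```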

{"title":"Cross-Modal Cues Improve the Detection of Synchronized Targets during Human Foraging.","authors":"Ivan Makarov, Runar Unnthorsson, Árni Kristjánsson, Ian M Thornton","doi":"10.1163/22134808-bja10135","DOIUrl":"https://doi.org/10.1163/22134808-bja10135","url":null,"abstract":"<p><p>In two experiments, we explored whether cross-modal cues can be used to improve foraging for multiple targets in a novel human foraging paradigm. Foraging arrays consisted of a 6 × 6 grid containing outline circles with a small dot on the circumference. Each dot rotated from a random starting location in steps of 30°, either clockwise or counterclockwise, around the circumference. Targets were defined by a synchronized rate of rotation, which varied from trial-to-trial, and there were two distractor sets, one that rotated faster and one that rotated slower than the target rate. In Experiment 1, we compared baseline performance to a condition in which a nonspatial auditory cue was used to indicate the rate of target rotation. While overall foraging speed remained slow in both conditions, suggesting serial scanning of the display, the auditory cue reduced target detection times by a factor of two. In Experiment 2, we replicated the auditory cue advantage, and also showed that a vibrotactile pulse, delivered to the wrist, could be almost as effective. Interestingly, a visual-cue to rotation rate, in which the frame of the display changed polarity in step with target rotation, did not lead to the same foraging advantage. Our results clearly demonstrate that cross-modal cues to synchrony can be used to improve multitarget foraging, provided that synchrony itself is a defining feature of target identity.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 6-8","pages":"457-474"},"PeriodicalIF":1.8,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
The Power of Trial History: How Previous Trial Shapes Audiovisual Integration.
IF 1.8 | CAS Zone 4 (Psychology) | Q3 (Biophysics) | Pub Date: 2024-11-01 | DOI: 10.1163/22134808-bja10133
Xiaoyu Tang, Wanlong Liu, Yingnan Wu, Rongxia Ren, Jiaying Sun, Jiajia Yang, Aijun Wang, Ming Zhang

Combining information from visual and auditory modalities to form a unified and coherent perception is known as audiovisual integration. Audiovisual integration is affected by many factors. However, it remains unclear whether the trial history can influence audiovisual integration. We used a target-target paradigm to investigate how the target modality and spatial location of the previous trial affect audiovisual integration under conditions of divided-modalities attention (Experiment 1) and modality-specific selective attention (Experiment 2). In Experiment 1, we found that audiovisual integration was enhanced in the repeat locations compared with switch locations. Audiovisual integration was largest following auditory targets compared to following visual and audiovisual targets. In Experiment 2, where participants were asked to focus only on the visual modality, we found that the audiovisual integration effect was larger in repeat-location trials than in switch-location trials only when an audiovisual target had been presented in the previous trial. The present results provide the first evidence that trial history can have an effect on audiovisual integration. The mechanisms by which trial history modulates audiovisual integration are discussed. Future examinations of audiovisual integration should carefully manipulate experimental conditions based on the effects of trial history.

{"title":"The Power of Trial History: How Previous Trial Shapes Audiovisual Integration.","authors":"Xiaoyu Tang, Wanlong Liu, Yingnan Wu, Rongxia Ren, Jiaying Sun, Jiajia Yang, Aijun Wang, Ming Zhang","doi":"10.1163/22134808-bja10133","DOIUrl":"https://doi.org/10.1163/22134808-bja10133","url":null,"abstract":"<p><p>Combining information from visual and auditory modalities to form a unified and coherent perception is known as audiovisual integration. Audiovisual integration is affected by many factors. However, it remains unclear whether the trial history can influence audiovisual integration. We used a target-target paradigm to investigate how the target modality and spatial location of the previous trial affect audiovisual integration under conditions of divided-modalities attention (Experiment 1) and modality-specific selective attention (Experiment 2). In Experiment 1, we found that audiovisual integration was enhanced in the repeat locations compared with switch locations. Audiovisual integration was the largest following the auditory targets compared to following the visual and audiovisual targets. In Experiment 2, where participants were asked to focus only on visual, we found that the audiovisual integration effect was larger in the repeat location trials than switch location trials only when the audiovisual target was presented in the previous trial. The present results provide the first evidence that trial history can have an effect on audiovisual integration. The mechanisms of trial history modulating audiovisual integration are discussed. Future examining of audiovisual integration should carefully manipulate experimental conditions based on the effects of trial history.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 6-8","pages":"431-456"},"PeriodicalIF":1.8,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Multisensory Integration of Native and Nonnative Speech in Bilingual and Monolingual Adults.
IF 1.8 | CAS Zone 4 (Psychology) | Q3 (Biophysics) | Pub Date: 2024-10-08 | DOI: 10.1163/22134808-bja10132
Riham Hafez Mohamed, Niloufar Ansari, Bahaa Abdeljawad, Celina Valdivia, Abigail Edwards, Kaitlyn M A Parks, Yassaman Rafat, Ryan A Stevenson

Face-to-face speech communication is an audiovisual process during which the interlocutors use both the auditory speech signals as well as visual, oral articulations to understand the other. These sensory inputs are merged into a single, unified percept, a process known as multisensory integration. Audiovisual speech integration is known to be influenced by many factors, including listener experience. In this study, we investigated the roles of bilingualism and language experience on integration. We used a McGurk paradigm in which participants were presented with incongruent auditory and visual speech. This included an auditory utterance of 'ba' paired with visual articulations of 'ga' that often induce the perception of 'da' or 'tha', a fusion effect that is strong evidence of integration, as well as an auditory utterance of 'ga' paired with visual articulations of 'ba' that often induce the perception of 'bga', a combination effect that is weaker evidence of integration. We compared fusion and combination effects across three groups (N = 20 each): English monolinguals, Spanish-English bilinguals, and Arabic-English bilinguals, with stimuli presented in all three languages. Monolinguals exhibited significantly stronger multisensory integration than bilinguals in fusion effects, regardless of the stimulus language. Bilinguals exhibited a nonsignificant trend by which greater experience led to increased integration as measured by fusion. These results held regardless of whether McGurk stimuli were presented as stand-alone syllables or in the context of real words.
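Scoring a McGurk experiment comes down to classifying each transcribed response against the auditory token, the visual token, and the predicted illusory percepts. A minimal classification sketch follows (Python); the response list is invented, and real studies use more elaborate coding schemes.

```python
from collections import Counter

# Classify responses for the auditory /ba/ + visual /ga/ McGurk stimulus.
# Fusion responses ("da"/"tha") are the strong marker of integration.
FUSION = {"da", "tha"}
responses = ["da", "ba", "tha", "da", "ba", "da", "ga", "tha"]  # invented transcriptions

def classify(resp):
    if resp in FUSION:
        return "fusion"
    if resp == "ba":
        return "auditory"   # heard the auditory component only
    if resp == "ga":
        return "visual"     # captured by the visual component
    return "other"

counts = Counter(classify(r) for r in responses)
fusion_rate = counts["fusion"] / len(responses)
print(counts, f"fusion rate = {fusion_rate:.2f}")
```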

{"title":"Multisensory Integration of Native and Nonnative Speech in Bilingual and Monolingual Adults.","authors":"Riham Hafez Mohamed, Niloufar Ansari, Bahaa Abdeljawad, Celina Valdivia, Abigail Edwards, Kaitlyn M A Parks, Yassaman Rafat, Ryan A Stevenson","doi":"10.1163/22134808-bja10132","DOIUrl":"10.1163/22134808-bja10132","url":null,"abstract":"<p><p>Face-to-face speech communication is an audiovisual process during which the interlocuters use both the auditory speech signals as well as visual, oral articulations to understand the other. These sensory inputs are merged into a single, unified process known as multisensory integration. Audiovisual speech integration is known to be influenced by many factors, including listener experience. In this study, we investigated the roles of bilingualism and language experience on integration. We used a McGurk paradigm in which participants were presented with incongruent auditory and visual speech. This included an auditory utterance of 'ba' paired with visual articulations of 'ga' that often induce the perception of 'da' or 'tha', a fusion effect that is strong evidence of integration, as well as an auditory utterance of 'ga' paired with visual articulations of 'ba' that often induce the perception of 'bga', a combination effect that is weaker evidence of integration. We compared fusion and combination effects on three groups ( N = 20 each), English monolinguals, Spanish-English bilinguals, and Arabic-English bilinguals, with stimuli presented in all three languages. Monolinguals exhibited significantly stronger multisensory integration than bilinguals in fusion effects, regardless of the stimulus language. Bilinguals exhibited a nonsignificant trend by which greater experience led to increased integration as measured by fusion. These results held regardless of whether McGurk presentations were presented as stand-alone syllables or in the context of real words.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"413-430"},"PeriodicalIF":1.8,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142395076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
The Impact of Viewing Distance and Proprioceptive Manipulations on a Virtual Reality Based Balance Test.
IF 1.8 | CAS Zone 4 (Psychology) | Q3 (Biophysics) | Pub Date: 2024-08-29 | DOI: 10.1163/22134808-bja10131
Max Teaford, Zachary J Mularczyk, Alannah Gernon, Daniel M Merfeld

Our ability to maintain our balance plays a pivotal role in day-to-day activities. This ability is believed to be the result of interactions between several sensory modalities including vision and proprioception. Past research has revealed that different aspects of vision including relative visual motion (i.e., sensed motion of the visual field due to head motion), which can be manipulated by changing the viewing distance between the individual and the predominant visual cues, have an impact on balance. However, only a small number of studies have examined this in the context of virtual reality, and none examined the impact of proprioceptive manipulations for viewing distances greater than 3.5 m. To address this, we conducted an experiment in which 25 healthy adults viewed a dartboard in a virtual gymnasium while standing in narrow stance on firm and compliant surfaces. The dartboard distance varied with three different conditions of 1.5 m, 6 m, and 24 m, including a blacked-out condition. Our results indicate that decreases in relative visual motion, due to an increased viewing distance, yield decreased postural stability - but only with simultaneous proprioceptive disruptions.
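The abstract does not name the stability measure, so the sketch below assumes a common one, sway path length computed from a center-of-pressure (or head-position) trace per viewing-distance condition; the sampling rate, trial length, and simulated traces are all illustrative, not the study's data.

```python
import numpy as np

def sway_path_length(x, y):
    """Total distance travelled by the trace (same units as the input)."""
    return np.sum(np.hypot(np.diff(x), np.diff(y)))

rng = np.random.default_rng(0)
fs, secs = 100, 30                     # illustrative 100 Hz recording, 30 s trial
for distance_m, noise in [(1.5, 0.8), (6, 1.0), (24, 1.3)]:
    # Simulated sway trace; larger noise stands in for greater instability.
    x = np.cumsum(rng.normal(0, noise, fs * secs)) * 0.01
    y = np.cumsum(rng.normal(0, noise, fs * secs)) * 0.01
    print(f"{distance_m:>4} m viewing distance: path length = {sway_path_length(x, y):.1f} (a.u.)")
```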

{"title":"The Impact of Viewing Distance and Proprioceptive Manipulations on a Virtual Reality Based Balance Test.","authors":"Max Teaford, Zachary J Mularczyk, Alannah Gernon, Daniel M Merfeld","doi":"10.1163/22134808-bja10131","DOIUrl":"10.1163/22134808-bja10131","url":null,"abstract":"<p><p>Our ability to maintain our balance plays a pivotal role in day-to-day activities. This ability is believed to be the result of interactions between several sensory modalities including vision and proprioception. Past research has revealed that different aspects of vision including relative visual motion (i.e., sensed motion of the visual field due to head motion), which can be manipulated by changing the viewing distance between the individual and the predominant visual cues, have an impact on balance. However, only a small number of studies have examined this in the context of virtual reality, and none examined the impact of proprioceptive manipulations for viewing distances greater than 3.5 m. To address this, we conducted an experiment in which 25 healthy adults viewed a dartboard in a virtual gymnasium while standing in narrow stance on firm and compliant surfaces. The dartboard distance varied with three different conditions of 1.5 m, 6 m, and 24 m, including a blacked-out condition. Our results indicate that decreases in relative visual motion, due to an increased viewing distance, yield decreased postural stability - but only with simultaneous proprioceptive disruptions.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"395-412"},"PeriodicalIF":1.8,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142114573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
What is the Relation between Chemosensory Perception and Chemosensory Mental Imagery?
IF 1.8 | CAS Zone 4 (Psychology) | Q3 (Biophysics) | Pub Date: 2024-08-27 | DOI: 10.1163/22134808-bja10130
Charles Spence

The study of chemosensory mental imagery is undoubtedly made more difficult because of the profound individual differences that have been reported in the vividness of (e.g.) olfactory mental imagery. At the same time, the majority of those researchers who have attempted to study people's mental imagery abilities for taste (gustation) have actually mostly been studying flavour mental imagery. Nevertheless, there exists a body of human psychophysical research showing that chemosensory mental imagery exhibits a number of similarities with chemosensory perception. Furthermore, the two systems have frequently been shown to interact with one another. The similarities and differences between chemosensory perception and chemosensory mental imagery at the introspective, behavioural, psychophysical, and cognitive neuroscience levels in humans are considered in this narrative historical review. The latest neuroimaging evidence shows that many of the same brain areas are engaged by chemosensory mental imagery as have previously been documented to be involved in chemosensory perception. That said, the pattern of neural connectivity is reversed between the 'top-down' control of chemosensory mental imagery and the 'bottom-up' control seen in the case of chemosensory perception. At the same time, however, there remain a number of intriguing questions as to whether it is even possible to distinguish between orthonasal and retronasal olfactory mental imagery, and the extent to which mental imagery for flavour, which most people not only describe as, but also perceive to be, the 'taste' of food and drink, is capable of reactivating the entire flavour network in the human brain.

{"title":"What is the Relation between Chemosensory Perception and Chemosensory Mental Imagery?","authors":"Charles Spence","doi":"10.1163/22134808-bja10130","DOIUrl":"https://doi.org/10.1163/22134808-bja10130","url":null,"abstract":"<p><p>The study of chemosensory mental imagery is undoubtedly made more difficult because of the profound individual differences that have been reported in the vividness of (e.g.) olfactory mental imagery. At the same time, the majority of those researchers who have attempted to study people's mental imagery abilities for taste (gustation) have actually mostly been studying flavour mental imagery. Nevertheless, there exists a body of human psychophysical research showing that chemosensory mental imagery exhibits a number of similarities with chemosensory perception. Furthermore, the two systems have frequently been shown to interact with one another, the similarities and differences between chemosensory perception and chemosensory mental imagery at the introspective, behavioural, psychophysical, and cognitive neuroscience levels in humans are considered in this narrative historical review. The latest neuroimaging evidence show that many of the same brain areas are engaged by chemosensory mental imagery as have previously been documented to be involved in chemosensory perception. That said, the pattern of neural connectively is reversed between the 'top-down' control of chemosensory mental imagery and the 'bottom-up' control seen in the case of chemosensory perception. At the same time, however, there remain a number of intriguing questions as to whether it is even possible to distinguish between orthonasal and retronasal olfactory mental imagery, and the extent to which mental imagery for flavour, which most people not only describe as, but also perceive to be, the 'taste' of food and drink, is capable of reactivating the entire flavour network in the human brain.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":" ","pages":"1-30"},"PeriodicalIF":1.8,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142082447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Evidence for a Causal Dissociation of the McGurk Effect and Congruent Audiovisual Speech Perception via TMS to the Left pSTS.
IF 1.8 | CAS Zone 4 (Psychology) | Q3 (Biophysics) | Pub Date: 2024-08-16 | DOI: 10.1163/22134808-bja10129
EunSeon Ahn, Areti Majumdar, Taraz G Lee, David Brang

Congruent visual speech improves speech perception accuracy, particularly in noisy environments. Conversely, mismatched visual speech can alter what is heard, leading to an illusory percept that differs from the auditory and visual components, known as the McGurk effect. While prior transcranial magnetic stimulation (TMS) and neuroimaging studies have identified the left posterior superior temporal sulcus (pSTS) as a causal region involved in the generation of the McGurk effect, it remains unclear whether this region is critical only for this illusion or also for the more general benefits of congruent visual speech (e.g., increased accuracy and faster reaction times). Indeed, recent correlative research suggests that the benefits of congruent visual speech and the McGurk effect rely on largely independent mechanisms. To better understand how these different features of audiovisual integration are causally generated by the left pSTS, we used single-pulse TMS to temporarily disrupt processing within this region while subjects were presented with either congruent or incongruent (McGurk) audiovisual combinations. Consistent with past research, we observed that TMS to the left pSTS reduced the strength of the McGurk effect. Importantly, however, left pSTS stimulation had no effect on the positive benefits of congruent audiovisual speech (increased accuracy and faster reaction times), demonstrating a causal dissociation between the two processes. Our results are consistent with models proposing that the pSTS is but one of multiple critical areas supporting audiovisual speech interactions. Moreover, these data add to a growing body of evidence suggesting that the McGurk effect is an imperfect surrogate measure for more general and ecologically valid audiovisual speech behaviors.

{"title":"Evidence for a Causal Dissociation of the McGurk Effect and Congruent Audiovisual Speech Perception via TMS to the Left pSTS.","authors":"EunSeon Ahn, Areti Majumdar, Taraz G Lee, David Brang","doi":"10.1163/22134808-bja10129","DOIUrl":"10.1163/22134808-bja10129","url":null,"abstract":"<p><p>Congruent visual speech improves speech perception accuracy, particularly in noisy environments. Conversely, mismatched visual speech can alter what is heard, leading to an illusory percept that differs from the auditory and visual components, known as the McGurk effect. While prior transcranial magnetic stimulation (TMS) and neuroimaging studies have identified the left posterior superior temporal sulcus (pSTS) as a causal region involved in the generation of the McGurk effect, it remains unclear whether this region is critical only for this illusion or also for the more general benefits of congruent visual speech (e.g., increased accuracy and faster reaction times). Indeed, recent correlative research suggests that the benefits of congruent visual speech and the McGurk effect rely on largely independent mechanisms. To better understand how these different features of audiovisual integration are causally generated by the left pSTS, we used single-pulse TMS to temporarily disrupt processing within this region while subjects were presented with either congruent or incongruent (McGurk) audiovisual combinations. Consistent with past research, we observed that TMS to the left pSTS reduced the strength of the McGurk effect. Importantly, however, left pSTS stimulation had no effect on the positive benefits of congruent audiovisual speech (increased accuracy and faster reaction times), demonstrating a causal dissociation between the two processes. Our results are consistent with models proposing that the pSTS is but one of multiple critical areas supporting audiovisual speech interactions. Moreover, these data add to a growing body of evidence suggesting that the McGurk effect is an imperfect surrogate measure for more general and ecologically valid audiovisual speech behaviors.</p>","PeriodicalId":51298,"journal":{"name":"Multisensory Research","volume":"37 4-5","pages":"341-363"},"PeriodicalIF":1.8,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11388023/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142082470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0