
Multisensory Research: Latest Publications

Off-Vertical Body Orientation Delays the Perceived Onset of Visual Motion.
IF 1.6 | CAS Tier 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2023-03-08 | DOI: 10.1163/22134808-bja10095
William Chung, Michael Barnett-Cowan

The integration of vestibular, visual and body cues is a fundamental process in the perception of self-motion and is commonly experienced in an upright posture. However, when the body is tilted in an off-vertical orientation these signals are no longer aligned relative to the influence of gravity. In this study, the perceived timing of visual motion was examined in the presence of sensory conflict introduced by manipulating the orientation of the body, generating a mismatch between body and vestibular cues due to gravity and creating an ambiguous vestibular signal of either head tilt or translation. In a series of temporal-order judgment tasks, participants reported the perceived onset of a visual scene simulating rotation around the yaw axis presented in virtual reality with a paired auditory tone while in an upright, supine and side-recumbent body position. The results revealed that the perceived onset of visual motion was further delayed from zero (i.e., true simultaneity between visual onset and a reference auditory tone) by approximately an additional 30 ms when viewed in a supine or side-recumbent orientation compared to an upright posture. There were also no significant differences in the timing estimates of the visual motion between all the non-upright orientations. This indicates that the perceived timing of visual motion is negatively impacted by the presence of conflict in the vestibular and body signals due to the direction of gravity and body orientation, even when the mismatch is not in the direct plane of the axis of rotation.
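For readers unfamiliar with how temporal-order judgment (TOJ) data yield onset estimates like the ~30 ms shift reported above, here is a minimal sketch of the standard psychophysics approach: fit a cumulative Gaussian to the proportion of "visual first" responses across stimulus-onset asynchronies (SOAs) and read off the point of subjective simultaneity (PSS). The data values are synthetic and illustrative, not the study's; the exact analysis the authors used may differ.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cumulative_gaussian(soa, pss, sigma):
    """P("visual first") as a function of SOA in ms (negative = tone first)."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Synthetic TOJ data (illustrative only): SOAs and proportion of
# "visual first" responses at each SOA.
soas = np.array([-150, -100, -50, 0, 50, 100, 150], dtype=float)
p_visual_first = np.array([0.05, 0.12, 0.30, 0.55, 0.80, 0.93, 0.98])

# Fit the psychometric function; pss is the SOA at which the two
# stimuli are perceived as simultaneous, sigma indexes precision.
(pss, sigma), _ = curve_fit(cumulative_gaussian, soas, p_visual_first,
                            p0=[0.0, 50.0])
print(f"PSS = {pss:.1f} ms, precision (sigma) = {sigma:.1f} ms")
```

A between-condition shift in the fitted PSS (e.g., upright vs. supine) is how a delayed perceived onset of visual motion would show up in this kind of analysis.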

Multisensory Research, 36(4), 347-366.
Citations: 0
Metacognition and Causal Inference in Audiovisual Speech.
IF 1.6 | CAS Tier 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2023-02-23 | DOI: 10.1163/22134808-bja10094
Faith Kimmet, Samantha Pedersen, Victoria Cardenas, Camila Rubiera, Grey Johnson, Addison Sans, Matthew Baldwin, Brian Odegaard

In multisensory environments, our brains perform causal inference to estimate which sources produce specific sensory signals. Decades of research have revealed the dynamics which underlie this process of causal inference for multisensory (audiovisual) signals, including how temporal, spatial, and semantic relationships between stimuli influence the brain's decision about whether to integrate or segregate. However, presently, very little is known about the relationship between metacognition and multisensory integration, and the characteristics of perceptual confidence for audiovisual signals. In this investigation, we ask two questions about the relationship between metacognition and multisensory causal inference: are observers' confidence ratings for judgments about Congruent, McGurk, and Rarely Integrated speech similar, or different? And do confidence judgments distinguish between these three scenarios when the perceived syllable is identical? To answer these questions, 92 online participants completed experiments where on each trial, participants reported which syllable they perceived, and rated confidence in their judgment. Results from Experiment 1 showed that confidence ratings were quite similar across Congruent speech, McGurk speech, and Rarely Integrated speech. In Experiment 2, when the perceived syllable for congruent and McGurk videos was matched, confidence scores were higher for congruent stimuli compared to McGurk stimuli. In Experiment 3, when the perceived syllable was matched between McGurk and Rarely Integrated stimuli, confidence judgments were similar between the two conditions. Together, these results provide evidence of the capacities and limitations of metacognition's ability to distinguish between different sources of multisensory information.

Multisensory Research, 36(3), 289-311.
Citations: 3
Assessing the Effects of Exercise, Cognitive Demand, and Rest on Audiovisual Multisensory Processing in Older Adults: A Pilot Study.
IF 1.6 | CAS Tier 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2023-01-24 | DOI: 10.1163/22134808-bja10085
Aysha Basharat, Michael Barnett-Cowan

A single bout of aerobic exercise is related to positive changes in higher-order cognitive function among older adults; however, the impact of aerobic exercise on multisensory processing remains unclear. Here we assessed the effects of a single bout of aerobic exercise on commonly utilized tasks that measure audiovisual multisensory processing: response time (RT), simultaneity judgements (SJ), and temporal-order judgements (TOJ), in a pilot study. To our knowledge this is the first effort to investigate the effects of three well-controlled intervention conditions on multisensory processing: resting, completing a cognitively demanding task, and performing aerobic exercise for 20 minutes. Our results indicate that the window of time within which stimuli from different modalities are integrated and perceived as simultaneous (temporal binding window; TBW) is malleable and changes after each intervention condition for both the SJ and TOJ tasks. Specifically, the TBW consistently became narrower post exercise while consistently increasing in width post rest, suggesting that aerobic exercise may improve temporal perception precision via broad neural change rather than targeting the specific networks that subserve either the SJ or TOJ tasks individually. The results from the RT task further support our findings of malleability of the multisensory processing system, as changes in performance, as assessed through cumulative probability models, were observed after each intervention condition. An increase in integration (i.e., greater magnitude of multisensory effect) however, was only found after a single bout of aerobic exercise. Overall, our results indicate that exercise uniquely affects the central nervous system and may broadly affect multisensory processing.
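The "cumulative probability models" of RT data mentioned above are commonly race-model analyses of redundant-signals experiments, where multisensory facilitation is detected as a violation of Miller's race-model bound, F_AV(t) ≤ F_A(t) + F_V(t). The sketch below uses simulated response times and Gaussian assumptions that are purely illustrative, not the study's data or necessarily its exact method.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of response times at each time point."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

# Simulated RTs in ms (illustrative): unisensory auditory, unisensory
# visual, and audiovisual trials, with AV made faster to mimic facilitation.
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 200)
rt_v = rng.normal(340, 40, 200)
rt_av = rng.normal(280, 35, 200)

t = np.arange(150, 500, 5)
# Miller's race-model inequality: F_AV(t) should not exceed
# min(F_A(t) + F_V(t), 1) if no integration occurs.
violation = ecdf(rt_av, t) - np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
print(f"max race-model violation: {violation.max():.3f}")
```

A positive maximum violation at fast RTs is the signature of multisensory integration beyond what parallel independent channels predict, which is the kind of "magnitude of multisensory effect" the abstract refers to.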

Multisensory Research, 36(3), 213-262.
Citations: 1
Neural Correlates of Audiovisual Speech Processing in Autistic and Non-Autistic Youth.
IF 1.6 | CAS Tier 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2023-01-19 | DOI: 10.1163/22134808-bja10093
Kacie Dunham, Alisa Zoltowski, Jacob I Feldman, Samona Davis, Baxter Rogers, Michelle D Failla, Mark T Wallace, Carissa J Cascio, Tiffany G Woynaroski

Autistic youth demonstrate differences in processing multisensory information, particularly in temporal processing of multisensory speech. Extensive research has identified several key brain regions for multisensory speech processing in non-autistic adults, including the superior temporal sulcus (STS) and insula, but it is unclear to what extent these regions are involved in temporal processing of multisensory speech in autistic youth. As a first step in exploring the neural substrates of multisensory temporal processing in this clinical population, we employed functional magnetic resonance imaging (fMRI) with a simultaneity-judgment audiovisual speech task. Eighteen autistic youth and a comparison group of 20 non-autistic youth matched on chronological age, biological sex, and gender participated. Results extend prior findings from studies of non-autistic adults, with non-autistic youth demonstrating responses in several similar regions as previously implicated in adult temporal processing of multisensory speech. Autistic youth demonstrated responses in fewer of the multisensory regions identified in adult studies; responses were limited to visual and motor cortices. Group responses in the middle temporal gyrus significantly interacted with age; younger autistic individuals showed reduced MTG responses whereas older individuals showed comparable MTG responses relative to non-autistic controls. Across groups, responses in the precuneus covaried with task accuracy, and anterior temporal and insula responses covaried with nonverbal IQ. These preliminary findings suggest possible differences in neural mechanisms of audiovisual processing in autistic youth while highlighting the need to consider participant characteristics in future, larger-scale studies exploring the neural basis of multisensory function in autism.

Multisensory Research, 36(3), 263-288. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10121891/pdf/
Citations: 0
Audio-Visual Interference During Motion Discrimination in Starlings.
IF 1.6 | CAS Tier 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2023-01-17 | DOI: 10.1163/22134808-bja10092
Gesa Feenders, Georg M Klump

Motion discrimination is essential for animals to avoid collisions, to escape from predators, to catch prey or to communicate. Although most terrestrial vertebrates can benefit by combining concurrent stimuli from sound and vision to obtain a most salient percept of the moving object, there is little research on the mechanisms involved in such cross-modal motion discrimination. We used European starlings as a model with a well-studied visual and auditory system. In a behavioural motion discrimination task with visual and acoustic stimuli, we investigated the effects of cross-modal interference and attentional processes. Our results showed an impairment of motion discrimination when the visual and acoustic stimuli moved in opposite directions as compared to congruent motion direction. By presenting an acoustic stimulus of very short duration, thus lacking directional motion information, an additional alerting effect of the acoustic stimulus became evident. Finally, we show that a temporally leading acoustic stimulus did not improve the response behaviour compared to the synchronous presentation of the stimuli as would have been expected in case of major alerting effects. This further supports the importance of congruency and synchronicity in the current test paradigm with a minor role of attentional processes elicited by the acoustic stimulus. Together, our data clearly show cross-modal interference effects in an audio-visual motion discrimination paradigm when carefully selecting real-life stimuli under parameter conditions that meet the known criteria for cross-modal binding.

Multisensory Research, 36(2), 181-212.
Citations: 0
Can We Train Multisensory Integration in Adults? A Systematic Review.
IF 1.8 | CAS Tier 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2023-01-13 | DOI: 10.1163/22134808-bja10090
Jessica O'Brien, Amy Mason, Jason Chan, Annalisa Setti

The ability to efficiently combine information from different senses is an important perceptual process that underpins much of our daily activities. This process, known as multisensory integration, varies from individual to individual, and is affected by the ageing process, with impaired processing associated with age-related conditions, including balance difficulties, mild cognitive impairment and cognitive decline. Impaired multisensory perception has also been associated with a range of neurodevelopmental conditions, where novel intervention approaches are actively sought, for example dyslexia and autism. However, it remains unclear to what extent and how multisensory perception can be modified by training. This systematic review aims to evaluate the evidence that we can train multisensory perception in neurotypical adults. In all, 1521 studies were identified following a systematic search of the databases PubMed, Scopus, PsychInfo and Web of Science. Following screening for inclusion and exclusion criteria, 27 studies were chosen for inclusion. Study quality was assessed using the Methodological Index for Non-Randomised Studies (MINORS) tool and the Cochrane Risk of Bias tool 2.0 for Randomised Control Trials. We found considerable evidence that in-task feedback training using psychophysics protocols led to improved task performance. The generalisability of this training to other tasks of multisensory integration was inconclusive, with few studies and mixed findings reported. Promising findings from exercise-based training indicate physical activity protocols warrant further investigation as potential training avenues for improving multisensory integration. Future research directions should include trialling training protocols with clinical populations and other groups who would benefit from targeted training to improve inefficient multisensory integration.

Multisensory Research, 36(2), 111-180.
Citations: 0
Front matter
CAS Tier 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2023-01-11 | DOI: 10.1163/22134808-00351p14
Citations: 0
'Tasting Imagination': What Role Chemosensory Mental Imagery in Multisensory Flavour Perception?
IF 1.6 | CAS Tier 4, Psychology | Q3 BIOPHYSICS | Pub Date: 2022-12-30 | DOI: 10.1163/22134808-bja10091
Charles Spence

A number of perplexing phenomena in the area of olfactory/flavour perception may fruitfully be explained by the suggestion that chemosensory mental imagery can be triggered automatically by perceptual inputs. In particular, the disconnect between the seemingly limited ability of participants in chemosensory psychophysics studies to distinguish more than two or three odorants in mixtures and the rich and detailed flavour descriptions that are sometimes reported by wine experts; the absence of awareness of chemosensory loss in many elderly individuals; and the insensitivity of the odour-induced taste enhancement (OITE) effect to the mode of presentation of olfactory stimuli (i.e., orthonasal or retronasal). The suggestion made here is that the theory of predictive coding, developed first in the visual modality, be extended to chemosensation. This may provide a fruitful way of thinking about the interaction between mental imagery and perception in the experience of aromas and flavours. Accepting such a suggestion also raises some important questions concerning the ecological validity/meaning of much of the chemosensory psychophysics literature that has been published to date.

Multisensory Research, vol. 36(1), pp. 93-109.
Citations: 2
The Impact of Singing on Visual and Multisensory Speech Perception in Children on the Autism Spectrum.
IF 1.8 · Psychology (CAS Region 4) · Q3 BIOPHYSICS · Pub Date: 2022-12-30 · DOI: 10.1163/22134808-bja10087
Jacob I Feldman, Alexander Tu, Julie G Conrad, Wayne Kuang, Pooja Santapuram, Tiffany G Woynaroski

Autistic children show reduced multisensory integration of audiovisual speech stimuli in response to the McGurk illusion. Previously, it has been shown that adults can integrate sung McGurk tokens. These sung speech tokens offer more salient visual and auditory cues, in comparison to the spoken tokens, which may increase the identification and integration of visual speech cues in autistic children. Forty participants (20 autistic, 20 non-autistic peers) aged 7-14 completed the study. Participants were presented with speech tokens in four modalities: auditory-only, visual-only, congruent audiovisual, and incongruent audiovisual (i.e., McGurk; auditory 'ba' and visual 'ga'). Tokens were also presented in two formats: spoken and sung. Participants indicated what they perceived via a four-button response box (i.e., 'ba', 'ga', 'da', or 'tha'). Accuracies and perception of the McGurk illusion were calculated for each modality and format. Analysis of visual-only identification indicated a significant main effect of format, whereby participants were more accurate in sung versus spoken trials, but no significant main effect of group or interaction effect. Analysis of the McGurk trials indicated no significant main effect of format or group and no significant interaction effect. Sung speech tokens improved identification of visual speech cues, but did not boost the integration of visual cues with heard speech across groups. Additional work is needed to determine what properties of sung speech contributed to the observed improvement in visual accuracy and to evaluate whether more prolonged exposure to sung speech may yield effects on multisensory integration.
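As a concrete illustration of the scoring described in this abstract, the sketch below tallies forced-choice responses from McGurk trials (auditory 'ba' paired with visual 'ga') into fusion, auditory-capture, and visual-capture rates. This is a minimal sketch only: the response data and the convention of scoring 'da'/'tha' as fused percepts are illustrative assumptions, not code or data from the study.

```python
from collections import Counter

def mcgurk_rates(responses):
    """Summarise forced-choice responses to McGurk trials.

    By convention, 'da'/'tha' responses are scored as fused (illusory)
    percepts, 'ba' as auditory capture, and 'ga' as visual capture.
    """
    counts = Counter(responses)
    n = len(responses)
    return {
        "fusion": (counts["da"] + counts["tha"]) / n,
        "auditory": counts["ba"] / n,
        "visual": counts["ga"] / n,
    }

# Hypothetical responses from ten trials of one participant (illustrative only).
rates = mcgurk_rates(["da", "ba", "da", "tha", "ba", "da", "ga", "da", "ba", "da"])
```

Computing one rate per modality x format cell, as the abstract describes, is then a matter of grouping trials by condition before calling the function.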

Multisensory Research, vol. 36(1), pp. 57-74. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9924934/pdf/
Citations: 0
Crossmodal Texture Perception Is Illumination-Dependent.
IF 1.6 · Psychology (CAS Region 4) · Q3 BIOPHYSICS · Pub Date: 2022-12-28 · DOI: 10.1163/22134808-bja10089
Karina Kangur, Martin Giesel, Julie M Harris, Constanze Hesse

Visually perceived roughness of 3D textures varies with illumination direction. Surfaces appear rougher when the illumination angle is lowered, resulting in a lack of roughness constancy. Here we aimed to investigate whether the visual system also relies on illumination-dependent features when judging roughness in a crossmodal matching task, or whether it can access illumination-invariant surface features that can also be evaluated by the tactile system. Participants (N = 32) explored an abrasive paper of medium physical roughness either tactually or visually under two different illumination conditions (top vs oblique angle). Subsequently, they had to judge whether a comparison stimulus (varying in physical roughness) matched the previously explored standard. Matching was performed either using the same modality as during exploration (intramodal) or using a different modality (crossmodal). In the intramodal conditions, participants performed equally well independent of the modality or illumination employed. In the crossmodal conditions, participants selected rougher tactile matches after exploring the standard visually under oblique illumination than under top illumination. Conversely, after tactile exploration, they selected smoother visual matches under oblique than under top illumination. These findings confirm that visual roughness perception depends on illumination direction and show, for the first time, that this failure of roughness constancy also transfers to judgements made crossmodally.
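The crossmodal shift this abstract reports can be summarised numerically as the difference in mean physical roughness of the comparison stimuli selected as matches under each illumination condition. Below is a minimal Python sketch under stated assumptions: the roughness values are hypothetical numbers in arbitrary units (the study used abrasive papers), not data from the paper.

```python
import statistics

def mean_matched_roughness(matches):
    """Mean physical roughness of the comparison stimuli that a participant
    judged to match the standard (higher value = rougher match)."""
    return statistics.mean(matches)

# Hypothetical visual-standard -> tactile-match values (arbitrary roughness
# units) for the two illumination conditions described in the abstract.
oblique_matches = [5.2, 5.8, 5.5, 6.0]  # standard viewed under oblique illumination
top_matches = [4.1, 4.4, 3.9, 4.6]      # standard viewed under top illumination

# A positive shift corresponds to the reported pattern: rougher tactile
# matches after viewing the standard under oblique illumination.
shift = mean_matched_roughness(oblique_matches) - mean_matched_roughness(top_matches)
```

The intramodal and tactile-to-visual conditions would be summarised the same way, with the sign of the shift reversing in the latter case.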

Multisensory Research, vol. 36(1), pp. 75-91.
Citations: 0