
Multisensory Research: Latest Publications

Perceived Audio-Visual Simultaneity Is Recalibrated by the Visual Intensity of the Preceding Trial.
IF 1.6, CAS Zone 4 (Psychology), Q1 Medicine, Pub Date: 2024-04-30, DOI: 10.1163/22134808-bja10121
Ryan Horsfall, Neil Harrison, Georg Meyer, Sophie Wuerger

A vital heuristic used when making judgements on whether audio-visual signals arise from the same event is the temporal coincidence of the respective signals. Previous research has highlighted a process whereby the perception of simultaneity rapidly recalibrates to account for differences in the physical temporal offsets of stimuli. The current paper investigated whether rapid recalibration also occurs in response to differences in central arrival latencies, driven by visual-intensity-dependent processing times. In a behavioural experiment, observers completed a temporal-order judgement (TOJ), simultaneity judgement (SJ) and simple reaction-time (RT) task and responded to audio-visual trials that were preceded by other audio-visual trials with either a bright or dim visual stimulus. It was found that the point of subjective simultaneity shifted, due to the visual intensity of the preceding stimulus, in the TOJ task but not the SJ task, while the RT data revealed no effect of preceding intensity. Our data therefore provide some evidence that the perception of simultaneity rapidly recalibrates based on stimulus intensity.
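For context, the point of subjective simultaneity (PSS) in a TOJ task is typically estimated by fitting a psychometric function to the proportion of "visual-first" responses across stimulus-onset asynchronies (SOAs) and reading off the 50% point; rapid recalibration then shows up as a shift of that point between trials preceded by bright versus dim stimuli. The sketch below illustrates such a fit with a cumulative Gaussian and made-up SOAs and response proportions; it is not the authors' analysis code.

```python
# Minimal sketch: estimate the point of subjective simultaneity (PSS) from
# temporal-order-judgement data by fitting a cumulative Gaussian.
# SOAs and proportions are illustrative, not data from this study.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soa_ms = np.array([-200, -100, -50, 0, 50, 100, 200])   # <0: audio leads, >0: visual leads
p_visual_first = np.array([0.05, 0.15, 0.30, 0.55, 0.75, 0.90, 0.98])

def cum_gauss(soa, pss, sigma):
    """P('visual first') as a function of SOA; pss is the 50% point."""
    return norm.cdf(soa, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa_ms, p_visual_first, p0=(0.0, 50.0))
print(f"PSS ~ {pss:.1f} ms, slope parameter sigma ~ {sigma:.1f} ms")
```

Comparing the fitted PSS for trials preceded by bright versus dim stimuli would quantify the recalibration effect reported here.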

Citations: 0
Tactile Landmarks: the Relative Landmark Location Alters Spatial Distortions
IF 1.6, CAS Zone 4 (Psychology), Q1 Medicine, Pub Date: 2024-04-25, DOI: 10.1163/22134808-bja10122
Paula Soballa, Christian Frings, Simon Merz
The influence of landmarks, that is, nearby non-target stimuli, on spatial perception has been shown in multiple ways. These include altered target localization variability near landmarks and systematic spatial distortions of target localizations. Previous studies have mostly been conducted in the visual modality using temporary, artificial landmarks or the tactile modality with persistent landmarks on the body. Thus, it is unclear whether both landmark types produce the same spatial distortions as they were never investigated in the same modality. Addressing this, we used a novel tactile setup to present temporary, artificial landmarks on the forearm and systematically manipulated their location to either be close to a persistent landmark (wrist or elbow) or in between both persistent landmarks at the middle of the forearm. Initial data (Exp. 1 and Exp. 2) suggested systematic differences of temporary landmarks based on their distance from the persistent landmark, possibly indicating different distortions of temporary and persistent landmarks. Subsequent control studies (Exp. 3 and Exp. 4) showed this effect was driven by the relative landmark location within the target distribution. Specifically, landmarks in the middle of the target distribution led to systematic distortions of target localizations toward the landmark, whereas landmarks at the side led to distortions away from the landmark for nearby targets, and toward the landmark with wider distances. Our results indicate that experimental results with temporary landmarks can be generalized to more natural settings with persistent landmarks, and further reveal that the relative landmark location leads to different effects of the pattern of spatial distortions.
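To make "distortion toward or away from the landmark" concrete, localization errors can be scored as a signed bias along the forearm axis, positive when the response error points toward the landmark and negative when it points away. The snippet below is a hypothetical scoring scheme for illustration only, not the authors' analysis pipeline.

```python
# Minimal sketch: signed localization bias relative to a landmark, measured
# along a single (forearm) axis. All values are hypothetical.
import numpy as np

def landmark_bias(targets, responses, landmark):
    """Positive values mean the localization error points toward the landmark."""
    errors = responses - targets
    toward = np.sign(landmark - targets)   # direction from each target to the landmark
    return errors * toward

targets   = np.array([5.0, 10.0, 15.0])   # target positions, cm from the wrist
responses = np.array([5.6, 10.1, 14.2])   # localization responses, cm
landmark  = 12.0                          # temporary landmark position, cm

print(landmark_bias(targets, responses, landmark))   # -> [0.6 0.1 0.8], all toward
```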
Citations: 0
Four-Stroke Apparent Motion Can Effectively Induce Visual Self-Motion Perception: an Examination Using Expanding, Rotating, and Translating Motion
IF 1.6, CAS Zone 4 (Psychology), Q1 Medicine, Pub Date: 2024-04-24, DOI: 10.1163/22134808-bja10120
Shinji Nakamura
The current investigation examined whether visual motion without continuous visual displacement could effectively induce self-motion perception (vection). Four-stroke apparent motions (4SAM) were employed in the experiments as visual inducers. The 4SAM pattern contained luminance-defined motion energy equivalent to the real motion pattern, and the participants perceived unidirectional motion according to the motion energy but without displacements (the visual elements flickered on the spot). The experiments revealed that the 4SAM stimulus could effectively induce vection in the horizontal, expanding, or rotational directions, although its strength was significantly weaker than that induced by the real-motion stimulus. This result suggests that visual displacement is not essential, and the luminance-defined motion energy and/or the resulting perceived motion of the visual inducer would be sufficient for inducing visual self-motion perception. Conversely, when the 4SAM and real-motion patterns were presented simultaneously, self-motion perception was mainly determined in accordance with real motion, suggesting that the real-motion stimulus is a predominant determinant of vection. These research outcomes may be worth considering when examining the perceptual and neurological mechanisms underlying self-motion perception.
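For readers unfamiliar with the stimulus, a four-stroke apparent motion cycle is commonly constructed from two frames of a pattern displaced by a small step plus contrast-reversed copies of the same two frames; looping the four frames yields consistent unidirectional motion energy while the pattern merely oscillates between two positions. The sketch below builds one such cycle for a 1-D luminance profile and is an illustrative construction, not the stimulus code used in the study.

```python
# Minimal sketch: one cycle of four-stroke apparent motion (4SAM) built from a
# 1-D sinusoidal luminance profile (values around mid-grey = 0). Illustrative only.
import numpy as np

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
pattern = np.sin(4 * x)                  # base luminance profile
step = 8                                 # displacement (in samples) between strokes

frame1 = pattern                         # stroke 1: pattern at the start position
frame2 = np.roll(pattern, step)          # stroke 2: same contrast, shifted by `step`
frame3 = -pattern                        # stroke 3: contrast-reversed, back at start
frame4 = -np.roll(pattern, step)         # stroke 4: contrast-reversed, shifted

cycle = np.stack([frame1, frame2, frame3, frame4])
# Looping 1 -> 2 -> 3 -> 4 -> 1 gives unidirectional motion energy (the contrast
# reversals turn the backward steps into forward reversed-phi motion) while the
# pattern never accumulates net displacement.
```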
Citations: 0
The Multimodal Trust Effects of Face, Voice, and Sentence Content
IF 1.6, CAS Zone 4 (Psychology), Q1 Medicine, Pub Date: 2024-04-03, DOI: 10.1163/22134808-bja10119
Isar Syed, M. Baart, Jean Vroomen
Trust is an aspect critical to human social interaction and research has identified many cues that help in the assimilation of this social trait. Two of these cues are the pitch of the voice and the width-to-height ratio of the face (fWHR). Additionally, research has indicated that the content of a spoken sentence itself has an effect on trustworthiness; a finding that has not yet been brought into multisensory research. The current research aims to investigate previously developed theories on trust in relation to vocal pitch, fWHR, and sentence content in a multimodal setting. Twenty-six female participants were asked to judge the trustworthiness of a voice speaking a neutral or romantic sentence while seeing a face. The average pitch of the voice and the fWHR were varied systematically. Results indicate that the content of the spoken message was an important predictor of trustworthiness extending into multimodality. Further, the mean pitch of the voice and fWHR of the face appeared to be useful indicators in a multimodal setting. These effects interacted with one another across modalities. The data demonstrate that trust in the voice is shaped by task-irrelevant visual stimuli. Future research is encouraged to clarify whether these findings remain consistent across genders, age groups, and languages.
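The facial width-to-height ratio (fWHR) referred to here is conventionally computed as bizygomatic width (left-to-right cheekbone distance) divided by upper-face height (the distance between the upper lip and the brow). The snippet below shows that calculation on hypothetical landmark coordinates; it is not the stimulus-manipulation code used in the study.

```python
# Minimal sketch: facial width-to-height ratio (fWHR) from four face landmarks.
# Landmark coordinates are hypothetical pixel positions, for illustration only.
import math

def fwhr(left_zygion, right_zygion, upper_lip, brow):
    """fWHR = bizygomatic width / upper-face height."""
    width = math.dist(left_zygion, right_zygion)
    height = math.dist(upper_lip, brow)
    return width / height

print(fwhr(left_zygion=(40, 210), right_zygion=(200, 210),
           upper_lip=(120, 250), brow=(120, 170)))     # -> 2.0
```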
Citations: 0
Addressing the Association Between Action Video Game Playing Experience and Visual Search in Naturalistic Multisensory Scenes
IF 1.6, CAS Zone 4 (Psychology), Q1 Medicine, Pub Date: 2024-02-13, DOI: 10.1163/22134808-bja10118
M. Hamzeloo, Daria Kvasova, Salvador Soto-Faraco
Prior studies investigating the effects of routine action video game play have demonstrated improvements in a variety of cognitive processes, including improvements in attentional tasks. However, there is little evidence indicating that the cognitive benefits of playing action video games generalize from simplified unisensory stimuli to multisensory scenes — a fundamental characteristic of natural, everyday life environments. The present study addressed if video game experience has an impact on crossmodal congruency effects when searching through such multisensory scenes. We compared the performance of action video game players (AVGPs) and non-video game players (NVGPs) on a visual search task for objects embedded in video clips of realistic scenes. We conducted two identical online experiments with gender-balanced samples, for a total of . Overall, the data replicated previous findings reporting search benefits when visual targets were accompanied by semantically congruent auditory events, compared to neutral or incongruent ones. However, according to the results, AVGPs did not consistently outperform NVGPs in the overall search task, nor did they use multisensory cues more efficiently than NVGPs. Exploratory analyses with self-reported gender as a variable revealed a potential difference in response strategy between experienced male and female AVGPs when dealing with crossmodal cues. These findings suggest that the generalization of the advantage of AVG experience to realistic, crossmodal situations should be made with caution and considering gender-related issues.
Citations: 0
Spatial Sensory References for Vestibular Self-Motion Perception.
IF 1.6, CAS Zone 4 (Psychology), Q1 Medicine, Pub Date: 2023-12-20, DOI: 10.1163/22134808-bja10117
Silvia Zanchi, Luigi F Cuturi, Giulio Sandini, Monica Gori, Elisa R Ferrè

While navigating through the surroundings, we constantly rely on inertial vestibular signals for self-motion along with visual and acoustic spatial references from the environment. However, the interaction between inertial cues and environmental spatial references is not yet fully understood. Here we investigated whether vestibular self-motion sensitivity is influenced by sensory spatial references. Healthy participants were administered a Vestibular Self-Motion Detection Task in which they were asked to detect vestibular self-motion sensations induced by low-intensity Galvanic Vestibular Stimulation. Participants performed this detection task with or without an external visual or acoustic spatial reference placed directly in front of them. We computed d-prime (d′) as a measure of participants' vestibular sensitivity and the criterion as an index of their response bias. Results showed that the visual spatial reference increased sensitivity to detect vestibular self-motion. Conversely, the acoustic spatial reference did not influence self-motion sensitivity. Neither the visual nor the auditory spatial reference caused changes in response bias. Environmental visual spatial references provide relevant information to enhance our ability to perceive inertial self-motion cues, suggesting a specific interaction between visual and vestibular systems in self-motion perception.
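For reference, sensitivity d′ and the criterion c in a yes/no detection task are the standard signal-detection indices, d′ = z(hit rate) − z(false-alarm rate) and c = −[z(hit rate) + z(false-alarm rate)]/2. The sketch below computes both from illustrative trial counts and is not the study's analysis code.

```python
# Minimal sketch: signal-detection sensitivity (d') and criterion (c) for a
# yes/no detection task. Trial counts are illustrative, not data from this study.
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    # Note: real analyses usually correct hit/false-alarm rates of 0 or 1
    # (e.g., with a log-linear correction) before taking z-scores.
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

dp, c = sdt_indices(hits=40, misses=10, false_alarms=15, correct_rejections=35)
print(f"d' = {dp:.2f}, criterion = {c:.2f}")   # d' = 1.37, criterion = -0.16
```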

Citations: 0
Cross-Modal Contributions to Episodic Memory for Voices.
IF 1.6, CAS Zone 4 (Psychology), Q1 Medicine, Pub Date: 2023-12-20, DOI: 10.1163/22134808-bja10116
Joshua R Tatz, Zehra F Peynircioğlu

Multisensory context often facilitates perception and memory. In fact, encoding items within a multisensory context can improve memory even on strictly unisensory tests (i.e., when the multisensory context is absent). Prior studies that have consistently found these multisensory facilitation effects have largely employed multisensory contexts in which the stimuli were meaningfully related to the items targeted for remembering (e.g., pairing canonical sounds and images). Other studies have used unrelated stimuli as multisensory context. A third possible type of multisensory context is one that is environmentally related simply because the stimuli are often encountered together in the real world. We predicted that encountering such a multisensory context would also enhance memory through cross-modal associations, or representations relating to one's prior multisensory experience with that sort of stimuli in general. In two memory experiments, we used faces and voices of unfamiliar people as everyday stimuli whose perceptual features individuals have substantial experience integrating. We assigned participants to face- or voice-recognition groups and ensured that, during the study phase, half of the face or voice targets were encountered also with information in the other modality. Voices initially encoded along with faces were consistently remembered better, providing evidence that cross-modal associations could explain the observed multisensory facilitation.

Citations: 0
Stationary Haptic Stimuli Do not Produce Ocular Accommodation in Most Individuals.
IF 1.6, CAS Zone 4 (Psychology), Q1 Medicine, Pub Date: 2023-11-28, DOI: 10.1163/22134808-bja10115
Lawrence R Stark, Kim Shiraishi, Tyler Sommerfeld

This study aimed to determine the extent to which haptic stimuli can influence ocular accommodation, either alone or in combination with vision. Accommodation was measured objectively in 15 young adults as they read stationary targets containing Braille letters. These cards were presented at four distances in the range 20-50 cm. In the Touch condition, the participant read by touch with their dominant hand in a dark room. Afterward, they estimated card distance with their non-dominant hand. In the Vision condition, they read by sight binocularly without touch in a lighted room. In the Touch with Vision condition, they read by sight binocularly and with touch in a lighted room. Sensory modality had a significant overall effect on the slope of the accommodative stimulus-response function. The slope in the Touch condition was not significantly different from zero, even though depth perception from touch was accurate. Nevertheless, one atypical participant had a moderate accommodative slope in the Touch condition. The accommodative slope in the Touch condition was significantly poorer than in the Vision condition. The accommodative slopes in the Vision condition and Touch with Vision condition were not significantly different. For most individuals, haptic stimuli for stationary objects do not influence the accommodation response, alone or in combination with vision. These haptic stimuli provide accurate distance perception, thus questioning the general validity of Heath's model of proximal accommodation as driven by perceived distance. Instead, proximally induced accommodation relies on visual rather than touch stimuli.

Citations: 0
Reflections on Cross-Modal Correspondences: Current Understanding and Issues for Future Research.
IF 1.8, CAS Zone 4 (Psychology), Q3 BIOPHYSICS, Pub Date: 2023-11-10, DOI: 10.1163/22134808-bja10114
Kosuke Motoki, Lawrence E Marks, Carlos Velasco

The past two decades have seen an explosion of research on cross-modal correspondences. Broadly speaking, this term has been used to encompass associations between and among features, dimensions, or attributes across the senses. There has been an increasing interest in this topic amongst researchers from multiple fields (psychology, neuroscience, music, art, environmental design, etc.) and, importantly, an increasing breadth of the topic's scope. Here, this narrative review aims to reflect on what cross-modal correspondences are, where they come from, and what underlies them. We suggest that cross-modal correspondences are usefully conceived as relative associations between different actual or imagined sensory stimuli, many of these correspondences being shared by most people. A taxonomy of correspondences with four major kinds of associations (physiological, semantic, statistical, and affective) characterizes cross-modal correspondences. Sensory dimensions (quantity/quality) and sensory features (lower perceptual/higher cognitive) correspond in cross-modal correspondences. Cross-modal correspondences may be understood (or measured) from two complementary perspectives: the phenomenal view (perceptual experiences of subjective matching) and the behavioural response view (observable patterns of behavioural response to multiple sensory stimuli). Importantly, we reflect on remaining questions and standing issues that need to be addressed in order to develop an explanatory framework for cross-modal correspondences. Future research needs (a) to understand better when (and why) phenomenal and behavioural measures are coincidental and when they are not, and, ideally, (b) to determine whether different kinds of cross-modal correspondence (quantity/quality, lower perceptual/higher cognitive) rely on the same or different mechanisms.

Citations: 0