
Latest publications in Seeing and Perceiving

Synaesthesia and the SNARC effect
Pub Date : 2012-01-01 DOI: 10.1163/187847612X648477
Clare N. Jonas
In number-form synaesthesia, numbers become explicitly mapped onto portions of space in the mind’s eye or around the body. However, non-synaesthetes are also known to map number onto space, though in an implicit way. For example, those who are literate in a language that is written in a left-to-right direction are likely to assign small numbers to the left side of space and large numbers to the right side of space (e.g., Dehaene et al., 1993). In non-synaesthetes, this mapping is flexible (e.g., numbers map onto a circular form if the participant is primed to do so by the appearance of a clock-face), which has been interpreted as a response to task demands (e.g., Bachtold et al., 1998) or as evidence of a linguistically-mediated, rather than a direct, link between number and space (e.g., Proctor and Cho, 2006). We investigated whether synaesthetes’ number forms show the same flexibility during an odd-or-even judgement task that tapped linguistic associations between number and space (following Gevers et al., 2010). Synaesthetes and non-synaesthetes alike mapped small numbers to the verbal label ‘left’ and large numbers to the verbal label ‘right’. This surprising result may indicate that synaesthetes’ number forms are also the result of a linguistic link between number and space, instead of a direct link between the two, or that performance on tasks such as these is not mediated by the number form.
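The small-left/large-right mapping described above (the SNARC effect) is conventionally quantified by regressing, for each digit, the right-hand minus left-hand response-time difference (dRT) on number magnitude; a negative slope indicates the canonical mapping. A minimal sketch, using hypothetical response times rather than data from the study:

```python
import numpy as np

# Hypothetical mean response times (ms) per digit, per responding hand
digits = np.array([1, 2, 3, 4, 6, 7, 8, 9])
rt_left = np.array([520, 525, 530, 535, 545, 550, 555, 560], dtype=float)
rt_right = np.array([560, 552, 545, 540, 532, 527, 522, 518], dtype=float)

# dRT = right-hand RT minus left-hand RT; positive means the right hand is slower
drt = rt_right - rt_left

# Linear regression of dRT on magnitude; a negative slope is the SNARC signature
slope, intercept = np.polyfit(digits, drt, 1)
print(f"SNARC slope: {slope:.2f} ms per unit of magnitude")
```

With a left-to-right mapping, small numbers yield faster left-hand responses and large numbers faster right-hand responses, so dRT decreases with magnitude and the slope comes out negative.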
Citations: 0
Generalization of visual shapes by flexible and simple rules.
Pub Date : 2012-01-01 Epub Date: 2011-07-19 DOI: 10.1163/187847511X571519
Bart Ons, Johan Wagemans

Rules and similarity are at the heart of our understanding of human categorization. However, it is difficult to distinguish their roles, as both determinants of categorization are confounded in many real situations. Rules are based on a number of identical properties between objects, but these correspondences also make objects appear more similar. Here, we introduced a stimulus set in which rules and similarity were unconfounded, and we let participants generalize category examples towards new instances. We also introduced a method based on the frequency distribution of the partitions formed in the stimulus sets, which allowed us to verify the role of rules and similarity in categorization. Our evaluation favoured the rule-based account. The most preferred rules were the simplest ones, and they consisted of recurrent visual properties (regularities) in the stimulus set. Additionally, we created different variants of the same stimulus set and tested the moderating influence of small changes in the appearance of the stimulus material. A conceptual manipulation (Experiment 1) had no influence, but all visual manipulations (Experiments 2 and 3) had strong influences on participants' reliance on particular rules, indicating that prior beliefs about category-defining rules are rather flexible.

Citations: 9
Electrophysiological correlates of tactile and visual perception during goal-directed movement
Pub Date : 2012-01-01 DOI: 10.1163/187847612X648008
G. Juravle, T. Heed, C. Spence, B. Roeder
Tactile information arriving at our sensory receptors is differentially processed over the various temporal phases of goal-directed movements. By using event-related potentials (ERPs), we investigated the neuronal correlates of tactile information processing during movement. Participants performed goal-directed reaches for an object placed centrally on the table in front of them. Tactile and visual stimuli were presented in separate trials during the different phases of the movement (i.e., preparation, execution, and post-movement). These stimuli were independently delivered to either the moving or the resting hand. In a control condition, the participants only performed the movement, while omission (movement-only) ERPs were recorded. Participants were told to ignore the presence or absence of any sensory events and solely concentrate on the execution of the movement. The results highlighted enhanced ERPs between 80 and 200 ms after tactile stimulation, and between 100 and 250 ms after visual stimulation. These modulations were greatest over the execution phase of the goal-directed movement, they were effector-based (i.e., significantly more negative for stimuli presented at the moving hand), and modality-independent (i.e., similar ERP enhancements were observed for both tactile and visual stimuli). The enhanced processing of sensory information over the execution phase of the movement suggests that incoming sensory information may be used for a potential adjustment of the current motor plan. Moreover, these results indicate a tight interaction between attentional mechanisms and the sensorimotor system.
Citations: 0
Temporal disparity effects on audiovisual integration in low vision individuals
Pub Date : 2012-01-01 DOI: 10.1163/187847612X648044
Stefano Targher, Valeria Occelli, M. Zampini
Our recent findings have shown that sounds improve visual detection in low vision individuals when the audiovisual pairs are presented simultaneously. The present study aims to investigate possible temporal aspects of the audiovisual enhancement effect that we have previously reported. Low vision participants were asked to detect the presence of a visual stimulus (yes/no task) either presented in isolation or together with an auditory stimulus at different SOAs. In the first experiment, when the sound always led the visual stimulus, there was a significant visual detection enhancement even when the visual stimulus was temporally delayed by 400 ms. However, the visual detection improvement was reduced in the second experiment, when the sound could randomly lead or lag the visual stimulus. A significant enhancement was found only when the audiovisual stimuli were synchronized. Taken together, the results of the present study seem to suggest that high-level associations between modalities might modulate audiovisual interactions in low vision individuals.
Citations: 0
Predictable variations in auditory pitch modulate the spatial processing of visual stimuli: An ERP study
Pub Date : 2012-01-01 DOI: 10.1163/187847612X646488
Fátima Vera-Constán, Irune Fernández-Prieto, Joel García-Morera, J. Navarra
We investigated whether perceiving predictable ‘ups and downs’ in acoustic pitch (as can be heard in musical melodies) can influence the spatial processing of visual stimuli as a consequence of a ‘spatial recoding’ of sound (see Foster and Zatorre, 2010; Rusconi et al., 2006). Event-related potentials (ERPs) were recorded while participants performed a color discrimination task of a visual target that could appear either above or below a centrally-presented fixation point. Each experimental trial started with an auditory isochronous stream of 11 tones including a high- and a low-pitched tone. The visual target appeared isochronously after the last tone. In the ‘non-predictive’ condition, the tones were presented in an erratic fashion (e.g., ‘high-low-low-high-high-low-high …’). In the ‘predictive condition’, the melodic combination of high- and low-pitched tones was highly predictable (e.g., ‘low-high-low-high-low …’). Within the predictive condition, the visual stimuli appeared congruently or incongruently with respect to the melody (‘… low-high-low-high-low-UP’ or ‘… low-high-low-high-low-DOWN’, respectively). Participants showed faster responses when the visual target appeared after a predictive melody. Electrophysiologically, early (25–150 ms) amplitude effects of predictability were observed in frontal and parietal regions, spreading to central regions (N1) afterwards. Predictability effects were also found in the P2–N2 complex and the P3 in central and parietal regions. Significant auditory-to-visual congruency effects were also observed in the parieto-occipital P3 component. Our findings reveal the existence of crossmodal effects of perceiving auditory isochronous melodies on visual temporal orienting. More importantly, our results suggest that pitch information can be transformed into a spatial code that shapes the spatial processing in other modalities such as vision.
Citations: 0
Assessing audiovisual saliency and visual-information content in the articulation of consonants and vowels on audiovisual temporal perception
Pub Date : 2012-01-01 DOI: 10.1163/187847612X646514
A. Vatakis, C. Spence
Research has revealed different temporal integration windows between and within different speech tokens. The limited set of speech tokens tested to date has not allowed for a proper evaluation of whether such differences are task- or stimulus-driven. We conducted a series of experiments to investigate how the physical differences associated with speech articulation affect the temporal aspects of audiovisual speech perception. Videos of consonants and vowels uttered by three speakers were presented. Participants made temporal order judgments (TOJs) regarding which speech-stream had been presented first. The sensitivity of participants’ TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. The results demonstrated that for place of articulation/roundedness, participants were more sensitive to the temporal order of highly salient speech signals, with smaller visual leads at the PSS. This was not the case when the manner of articulation/height was evaluated. These findings suggest that the visual speech signal provides substantial cues to the auditory signal that modulate the relative processing times required for the perception of the speech stream. A subsequent experiment explored how the presentation of different sources of visual information modulated these findings. Videos of three consonants were presented under natural and point-light (PL) viewing conditions revealing parts, or the whole, of the face. Preliminary analysis revealed no differences in TOJ accuracy under different viewing conditions.
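The PSS reported in TOJ studies like this one is conventionally the SOA at which a psychometric function fitted to the response proportions crosses 50%. A minimal sketch of one common estimation approach (a probit-linear fit of a cumulative Gaussian), using hypothetical response proportions rather than data from the study:

```python
import numpy as np
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (inverse cumulative Gaussian)

# Hypothetical TOJ data: SOA in ms (positive = visual stream leads) and the
# proportion of "visual first" responses at each SOA
soa = np.array([-200, -100, -50, 0, 50, 100, 200], dtype=float)
p_visual_first = np.array([0.05, 0.20, 0.35, 0.50, 0.70, 0.85, 0.95])

# Probit transform linearizes a cumulative Gaussian, so a straight-line fit
# recovers its parameters
zvals = np.array([z(p) for p in p_visual_first])
slope, intercept = np.polyfit(soa, zvals, 1)

pss = -intercept / slope      # SOA at 50% responses (point of subjective simultaneity)
jnd = z(0.75) / slope         # SOA shift from 50% to 75% (just noticeable difference)
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

With these symmetric hypothetical proportions the fitted PSS falls near 0 ms; a reliably non-zero PSS would indicate that one modality must lead for the pair to appear simultaneous.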
Citations: 0
Somatosensory amplification and illusory tactile sensations
Pub Date : 2012-01-01 DOI: 10.1163/187847612X646569
Vrushant Lakhlani, Kirsten J. McKenzie
Experimental studies have demonstrated that it is possible to induce convincing bodily distortions in neurologically healthy individuals through cross-modal manipulations, such as the rubber hand illusion (Botvinick and Cohen, 1998), the parchment skin illusion (Jousmaki and Hari, 1998), and the Somatic Signal Detection Task (SSDT; Lloyd et al., 2008). It has previously been shown with the SSDT that when a tactile stimulus is presented with a simultaneous light flash, individuals show both increased sensitivity to the tactile stimulus and a tendency to report feeling the stimulus even when one was not presented, a tendency which varies greatly between individuals but remains constant over time within an individual (McKenzie et al., 2010). Further studies into tactile stimulus discrimination using the Somatic Signal Discrimination Task (SSDiT) have also shown that a concurrent light led to a significant improvement in people’s ability to discriminate ‘weak’ tactile stimuli from ‘strong’ ones, as well as a bias towards reporting any tactile stimulus as ‘strong’ (Poliakoff et al., in preparation), indicating that the light may influence both early and later stages of processing. The current study investigated whether the tendency to report higher numbers of false alarms when carrying out the SSDT is correlated with the tendency to experience higher numbers of cross-modal ‘enhancements’ of weak tactile signals (leading to classifications of ‘weak’ stimuli as ‘strong’, and ‘strong’ stimuli as ‘stronger’).
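Performance in detection paradigms such as the SSDT is typically decomposed into sensitivity (d') and response criterion (c), computed from hit and false-alarm rates on touch-present versus touch-absent trials. A minimal sketch of that standard computation, with hypothetical trial counts (the numbers below are illustrative, not from the study):

```python
from statistics import NormalDist


def sdt_measures(hits: int, misses: int, fas: int, crs: int) -> tuple[float, float]:
    """Return (d', c) from raw trial counts using a log-linear correction,
    which guards against hit/false-alarm rates of exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion


# Hypothetical counts: trials with vs. without a concurrent light flash
d_light, c_light = sdt_measures(hits=40, misses=10, fas=12, crs=38)
d_nolight, c_nolight = sdt_measures(hits=32, misses=18, fas=5, crs=45)
print(f"light:    d' = {d_light:.2f}, c = {c_light:.2f}")
print(f"no light: d' = {d_nolight:.2f}, c = {c_nolight:.2f}")
```

Separating d' from c is what allows the pattern described above, more hits accompanied by more false alarms under the light, to be read as a criterion shift rather than (or in addition to) a change in sensitivity.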
Citations: 0
Updating expectancies about audiovisual associations in speech
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647946
Tim Paris, Jeesun Kim, C. Davis
The processing of multisensory information depends on learned associations between sensory cues. In the case of speech there is a well-learned association between the movements of the lips and the subsequent sound: particular lip and mouth movements reliably lead to a specific sound. EEG and MEG studies that have investigated the differences between this 'congruent' AV association and other 'incongruent' associations have commonly reported ERP differences from 350 ms after sound onset. Using a 256 active electrode EEG system, we tested whether this 'congruency effect' would be reduced in a context where most of the trials had an altered audiovisual association (auditory speech paired with mismatched visual lip movements). Participants were presented with stimuli over two sessions: in one session only 15% of trials were incongruent; in the other, 85% were incongruent. We found a congruency effect, with ERP differences between congruent and incongruent speech between 350 and 500 ms. Importantly, this effect was reduced in the context of mostly incongruent trials. This reduction indicates that the way in which AV speech is processed depends on the context in which it is viewed.
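The two-session manipulation above amounts to generating randomized trial lists with a fixed proportion of incongruent audiovisual pairings. A minimal sketch of such a generator (hypothetical function name and trial counts; not the authors' code):

```python
import random

def make_session(n_trials, p_incongruent, seed=None):
    """Build a randomized trial list in which a fixed proportion of
    audiovisual speech trials has a mismatched (incongruent) pairing."""
    rng = random.Random(seed)
    n_incon = round(n_trials * p_incongruent)
    trials = (["incongruent"] * n_incon +
              ["congruent"] * (n_trials - n_incon))
    rng.shuffle(trials)  # randomize trial order within the session
    return trials

# One mostly-congruent and one mostly-incongruent session,
# mirroring the 15% / 85% proportions described above.
session_a = make_session(200, 0.15, seed=1)  # 30 incongruent trials
session_b = make_session(200, 0.85, seed=2)  # 170 incongruent trials
```

Shuffling (rather than blocking) keeps the incongruent trials unpredictable within a session, so only the overall proportion can drive any change in the congruency effect.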
Seeing and Perceiving 120(1): 164.
Citations: 0
Features of the human rod bipolar cell ERG response during fusion of scotopic flicker.
Pub Date : 2012-01-01 DOI: 10.1163/187847612x648792
Allison M Cameron, Jacqueline S C Lam

The ability of the eye to distinguish between intermittently presented flash stimuli is a measure of the temporal resolution of vision. The aim of this study was to examine the relationship between the features of the human rod bipolar cell response (as measured from the scotopic ERG b-wave) and the psychophysically measured critical fusion frequency (CFF). Stimuli consisted of dim (-0.04 Td·s), blue flashes presented either singly, or as flash pairs (at a range of time separations, between 5 and 300 ms). Single flashes of double intensity (-0.08 Td·s) were also presented as a reference. Visual responses to flash pairs were measured via (1) recording of the ERG b-wave, and (2) threshold determinations of the CFF using a two-alternative forced-choice method (flicker vs. fused illumination). The results of this experiment suggest that b-wave responses to flash pairs separated by < 100 ms are electrophysiologically similar to those obtained with single flashes of double intensity. Psychophysically, the percepts of flash pairs < 100 ms apart appeared fused.
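The core electrophysiological result — flash pairs closer than about 100 ms behaving like one double-intensity flash — can be caricatured as a linear integrator with a fixed temporal window. A toy model under assumed parameters (the window length and function names are illustrative, not the study's analysis):

```python
def integrated_intensity(flash_times, flash_intensity, window=0.100):
    """Sum the intensities of flashes falling within one integration
    window (in seconds); return the largest integrated burst."""
    best = 0.0
    for t0 in flash_times:
        # total intensity of all flashes within `window` of flash t0
        total = sum(flash_intensity for t in flash_times
                    if 0 <= t - t0 < window)
        best = max(best, total)
    return best

# A pair 50 ms apart integrates like a single double-intensity flash;
# a pair 300 ms apart is resolved as two separate events.
pair_close = integrated_intensity([0.0, 0.050], 1.0)  # 2.0
pair_far   = integrated_intensity([0.0, 0.300], 1.0)  # 1.0
double     = integrated_intensity([0.0], 2.0)         # 2.0
```

In this caricature the close pair and the double-intensity single flash are indistinguishable, matching both the b-wave similarity and the psychophysical fusion reported above.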

Seeing and Perceiving 25(6): 545–560.
Citations: 2
Combining fiber tracking and functional brain imaging for revealing brain networks involved in auditory–visual integration in humans
Pub Date : 2012-01-01 DOI: 10.1163/187847612X646280
A. Beer, Tina Plank, Evangelia-Regkina Symeonidou, G. Meyer, M. Greenlee
Previous functional magnetic resonance imaging (fMRI) studies found various brain areas in the temporal and occipital lobe involved in integrating auditory and visual object information. Fiber tracking based on diffusion-weighted MRI suggested neuroanatomical connections between auditory cortex and sub-regions of the temporal and occipital lobe. However, the relationship between functional activity and white-matter tracts remained unclear. Here, we combined probabilistic tracking and functional MRI in order to reveal the structural connections related to auditory–visual object perception. Ten healthy people were examined by diffusion-weighted and functional MRI. During functional examinations they viewed movies of lip or body movements, listened to corresponding sounds (phonological sounds or body action sounds), or received a combination of both. We found that phonological sounds elicited stronger activity in the lateral superior temporal gyrus (STG) than body action sounds. Body movements elicited stronger activity in the lateral occipital cortex than lip movements. Functional activity in the phonological STG region and the lateral occipital body area was mutually modulated (sub-additively) by combined auditory–visual stimulation. Moreover, bimodal stimuli engaged a region in the posterior superior temporal sulcus (STS). Probabilistic tracking revealed white-matter tracts between the auditory cortex and sub-regions of the STS (anterior and posterior) and occipital cortex. The posterior STS region was also found to be relevant for auditory–visual object perception.
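Probabilistic tractography, as used above, repeatedly propagates streamlines while sampling each step direction from a distribution around the local principal diffusion direction, then counts how often candidate regions are reached. A minimal 2D sketch with a hypothetical direction field and made-up parameters (step size, dispersion):

```python
import math
import random

def track(start, direction_field, n_steps=50, step=1.0,
          dispersion=0.2, seed=0):
    """Propagate one streamline: at each step, sample an angle around
    the local principal direction and advance by a fixed step length."""
    rng = random.Random(seed)
    x, y = start
    path = [(x, y)]
    for _ in range(n_steps):
        theta = direction_field(x, y) + rng.gauss(0.0, dispersion)
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        path.append((x, y))
    return path

def field(x, y):
    # Toy field: principal diffusion direction points uniformly rightward.
    return 0.0

# Many randomized streamlines from the same seed point; their spread
# reflects the orientation uncertainty encoded by `dispersion`.
paths = [track((0.0, 0.0), field, seed=s) for s in range(100)]
mean_end_x = sum(p[-1][0] for p in paths) / len(paths)
```

Real tractography works in 3D on fiber-orientation distributions estimated from the diffusion-weighted data, but the logic is the same: the fraction of streamlines connecting two regions serves as the connectivity measure.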
Seeing and Perceiving 25(1): 5.
Citations: 1