
Seeing and Perceiving: Latest Publications

Synaesthesia and the SNARC effect
Pub Date: 2012-01-01 DOI: 10.1163/187847612X648477
Clare N. Jonas
In number-form synaesthesia, numbers become explicitly mapped onto portions of space in the mind’s eye or around the body. However, non-synaesthetes are also known to map number onto space, though in an implicit way. For example, those who are literate in a language that is written in a left-to-right direction are likely to assign small numbers to the left side of space and large numbers to the right side of space (e.g., Dehaene et al., 1993). In non-synaesthetes, this mapping is flexible (e.g., numbers map onto a circular form if the participant is primed to do so by the appearance of a clock-face), which has been interpreted as a response to task demands (e.g., Bachtold et al., 1998) or as evidence of a linguistically-mediated, rather than a direct, link between number and space (e.g., Proctor and Cho, 2006). We investigated whether synaesthetes’ number forms show the same flexibility during an odd-or-even judgement task that tapped linguistic associations between number and space (following Gevers et al., 2010). Synaesthetes and non-synaesthetes alike mapped small numbers to the verbal label ‘left’ and large numbers to the verbal label ‘right’. This surprising result may indicate that synaesthetes’ number forms are also the result of a linguistic link between number and space, instead of a direct link between the two, or that performance on tasks such as these is not mediated by the number form.
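
The abstract does not give the statistical analysis, but SNARC-type effects of this kind are commonly quantified by regressing the 'right'-response minus 'left'-response reaction-time difference on number magnitude, with a negative slope indicating the small-left/large-right mapping. A minimal Python sketch of that conventional analysis; all values and variable names are illustrative, not taken from the study:

```python
import numpy as np

# Hypothetical per-digit mean response times (ms) for 'left' and 'right'
# verbal-label responses; the numbers are invented for illustration only.
digits = np.array([1, 2, 3, 4, 6, 7, 8, 9])
rt_left = np.array([520, 525, 530, 538, 560, 565, 572, 580], dtype=float)
rt_right = np.array([575, 570, 562, 555, 535, 530, 524, 518], dtype=float)

# SNARC effect: regress dRT = RT(right) - RT(left) on digit magnitude.
# A negative slope means small numbers are answered faster with 'left'
# and large numbers faster with 'right'.
d_rt = rt_right - rt_left
slope, intercept = np.polyfit(digits, d_rt, deg=1)
print(f"SNARC slope: {slope:.2f} ms per unit magnitude")
```
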
Citations: 0
Generalization of visual shapes by flexible and simple rules.
Pub Date: 2012-01-01 Epub Date: 2011-07-19 DOI: 10.1163/187847511X571519
Bart Ons, Johan Wagemans

Rules and similarity are at the heart of our understanding of human categorization. However, it is difficult to distinguish their roles, as both determinants of categorization are confounded in many real situations. Rules are based on a number of identical properties between objects, but these correspondences also make objects appear more similar. Here, we introduced a stimulus set where rules and similarity were unconfounded and we let participants generalize category examples towards new instances. We also introduced a method based on the frequency distribution of the formed partitions in the stimulus sets, which allowed us to verify the role of rules and similarity in categorization. Our evaluation favoured the rule-based account. The most preferred rules were the simplest ones, and they consisted of recurrent visual properties (regularities) in the stimulus set. Additionally, we created different variants of the same stimulus set and tested the moderating influence of small changes in the appearance of the stimulus material. A conceptual manipulation (Experiment 1) had no influence, but all visual manipulations (Experiments 2 and 3) had strong influences on participants' reliance on particular rules, indicating that prior beliefs about category-defining rules are rather flexible.
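
The abstract describes a method based on the frequency distribution of the partitions participants form, without giving implementation details. As an illustration of the bookkeeping such a method requires, the sketch below tallies how often each grouping of a small stimulus set occurs across participants, using an order-free (canonical) representation of each partition; the stimulus labels and responses are invented, not the authors' data:

```python
from collections import Counter

def canonical(partition):
    """Order-free representation of a partition of stimulus labels, so that
    [['A','B'],['C']] and [['C'],['B','A']] count as the same grouping."""
    return frozenset(frozenset(group) for group in partition)

# Hypothetical categorizations of a four-item stimulus set by five
# participants; the groupings are made up for illustration only.
responses = [
    [["A", "B"], ["C", "D"]],
    [["A", "B"], ["C", "D"]],
    [["A", "C"], ["B", "D"]],
    [["A", "B"], ["C", "D"]],
    [["A"], ["B", "C", "D"]],
]

# Frequency distribution of the formed partitions.
frequencies = Counter(canonical(r) for r in responses)
for partition, count in frequencies.most_common():
    groups = [sorted(g) for g in partition]
    print(f"{count} participant(s): {groups}")
```
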

Citations: 9
Features of the human rod bipolar cell ERG response during fusion of scotopic flicker.
Pub Date: 2012-01-01 DOI: 10.1163/187847612x648792
Allison M Cameron, Jacqueline S C Lam

The ability of the eye to distinguish between intermittently presented flash stimuli is a measure of the temporal resolution of vision. The aim of this study was to examine the relationship between the features of the human rod bipolar cell response (as measured from the scotopic ERG b-wave) and the psychophysically measured critical fusion frequency (CFF). Stimuli consisted of dim (-0.04 Td x s), blue flashes presented either singly, or as flash pairs (at a range of time separations, between 5 and 300 ms). Single flashes of double intensity (-0.08 Td x s) were also presented as a reference. Visual responses to flash pairs were measured via (1) recording of the ERG b-wave, and (2) threshold determinations of the CFF using a two-alternative forced-choice method (flicker vs. fused illumination). The results of this experiment suggest that b-wave responses to flash pairs separated by < 100 ms are electrophysiologically similar to those obtained with single flashes of double intensity. Psychophysically, the percepts of flash pairs < 100 ms apart appeared fused. In conclusion, the visual system's ability to discriminate between scotopic stimuli may be determined by the response characteristics of the rod bipolar cell, or perhaps by the rod photoreceptor itself.
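
The abstract mentions two-alternative forced-choice threshold determinations of the CFF but does not specify the adaptive procedure. The sketch below shows one generic possibility, a 1-up/2-down staircase on the flash separation run against a simulated observer, which converges near the 70.7%-correct point; the observer model, nominal threshold, and step sizes are all assumptions for illustration, not the authors' method:

```python
import math
import random

def simulated_observer(separation_ms, threshold_ms=100.0, slope_ms=15.0):
    """Toy observer for the flicker-vs-fused 2AFC judgement: the chance of a
    correct 'flicker' report rises from 0.5 towards 1.0 as the flash
    separation grows past a nominal threshold (values are illustrative)."""
    p_correct = 0.5 + 0.5 / (1.0 + math.exp(-(separation_ms - threshold_ms) / slope_ms))
    return random.random() < p_correct

def staircase_cff(start_ms=300.0, step_ms=10.0, n_reversals=12):
    """1-up/2-down staircase on flash separation; converges near the
    70.7%-correct point of the simulated psychometric function."""
    separation = start_ms
    correct_streak = 0
    last_direction = None
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_observer(separation):
            correct_streak += 1
            if correct_streak < 2:
                continue                      # need two correct before stepping down
            correct_streak = 0
            direction = "down"                # make the task harder
            separation = max(separation - step_ms, 5.0)
        else:
            correct_streak = 0
            direction = "up"                  # make the task easier
            separation += step_ms
        if last_direction is not None and direction != last_direction:
            reversals.append(separation)
        last_direction = direction
    return sum(reversals[-8:]) / len(reversals[-8:])

random.seed(1)
print(f"Estimated fusion threshold: {staircase_cff():.1f} ms flash separation")
```
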

Citations: 2
Combining fiber tracking and functional brain imaging for revealing brain networks involved in auditory–visual integration in humans
Pub Date: 2012-01-01 DOI: 10.1163/187847612X646280
A. Beer, Tina Plank, Evangelia-Regkina Symeonidou, G. Meyer, M. Greenlee
Previous functional magnetic resonance imaging (fMRI) studies found various brain areas in the temporal and occipital lobes involved in integrating auditory and visual object information. Fiber tracking based on diffusion-weighted MRI suggested neuroanatomical connections between auditory cortex and sub-regions of the temporal and occipital lobes. However, the relationship between functional activity and white-matter tracts remained unclear. Here, we combined probabilistic tracking and functional MRI in order to reveal the structural connections related to auditory–visual object perception. Ten healthy people were examined by diffusion-weighted and functional MRI. During functional examinations they viewed either movies of lip or body movements, listened to corresponding sounds (phonological sounds or body action sounds), or a combination of both. We found that phonological sounds elicited stronger activity in the lateral superior temporal gyrus (STG) than body action sounds. Body movements elicited stronger activity in the lateral occipital cortex than lip movements. Functional activity in the phonological STG region and the lateral occipital body area was mutually modulated (sub-additive) by combined auditory–visual stimulation. Moreover, bimodal stimuli engaged a region in the posterior superior temporal sulcus (STS). Probabilistic tracking revealed white-matter tracts between the auditory cortex and sub-regions of the STS (anterior and posterior) and occipital cortex. The posterior STS region was also found to be relevant for auditory–visual object perception. The anterior STS region showed connections to the phonological STG area and to the lateral occipital body area. Our findings suggest that multisensory networks in the temporal lobe are best revealed by combining functional and structural measures.
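
A sub-additive audiovisual interaction of the kind reported here is often tested by comparing the bimodal response with the sum of the two unimodal responses across participants. The sketch below illustrates that comparison on simulated ROI responses; the values, the paired t-test, and the ten-subject sample are assumptions for illustration, not the authors' analysis:

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject ROI responses (arbitrary BOLD units) for ten
# participants; the numbers are simulated purely to illustrate the test.
rng = np.random.default_rng(0)
resp_a  = rng.normal(1.0, 0.2, size=10)   # auditory-only condition
resp_v  = rng.normal(1.2, 0.2, size=10)   # visual-only condition
resp_av = rng.normal(1.8, 0.2, size=10)   # audiovisual condition

# Sub-additivity criterion: the bimodal response falls short of the sum of
# the unimodal responses (AV < A + V), tested across subjects.
t, p = stats.ttest_rel(resp_av, resp_a + resp_v)
print(f"AV vs. A+V: t = {t:.2f}, p = {p:.3f} (negative t -> sub-additive)")
```
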
Citations: 1
Investigating task and modality switching costs using bimodal stimuli
Pub Date: 2012-01-01 DOI: 10.1163/187847612X646451
Rajwant Sandhu, B. Dyson
Concurrent task and modality switching effects have to date been studied under conditions of uni-modal stimulus presentation. As such, it is difficult to directly compare resultant task and modality switching effects, as the stimuli afford both tasks on each trial, but only one modality. The current study investigated task and modality switching using bi-modal stimulus presentation under various cue conditions: task and modality (double cue), either task or modality (single cue), or no cue. Participants responded to either the identity or the position of an audio–visual stimulus. Switching effects were defined as staying within a modality/task (repetition) or switching into a modality/task (change) from trial n − 1 to trial n, with analysis performed on trial n data. While task and modality switching costs were sub-additive across all conditions, replicating previous data, modality switching effects were dependent on the modality being attended, and task switching effects were dependent on the task being performed. Specifically, visual responding and position responding revealed significant costs associated with modality and task switching, while auditory responding and identity responding revealed significant gains associated with modality and task switching. The effects interacted further, revealing that the costs and gains associated with task and modality switching vary with the specific combination of modality and task type. The current study reconciles previous data by suggesting that efficiently processed modality/task information benefits from repetition, while less efficiently processed information benefits from change, owing to less interference from preferred processing across consecutive trials.
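
Switch costs of the kind analyzed here are usually computed as the mean response time on switch trials minus the mean on repetition trials, with each trial n classified against trial n − 1. A minimal sketch of that computation on a hypothetical trial table; the column names and values are invented for illustration, not the study's data:

```python
import pandas as pd

# Hypothetical trial-level data; column names and values are illustrative.
trials = pd.DataFrame({
    "modality": ["aud", "aud", "vis", "vis", "aud", "vis", "vis", "aud"],
    "task":     ["id",  "pos", "pos", "pos", "id",  "id",  "pos", "pos"],
    "rt_ms":    [612,   655,   590,   548,   630,   664,   577,   640],
})

# Classify each trial n by comparison with trial n-1 (first trial dropped).
trials["modality_switch"] = trials["modality"] != trials["modality"].shift(1)
trials["task_switch"] = trials["task"] != trials["task"].shift(1)
trials = trials.iloc[1:]

# Switch cost = mean RT on switch trials minus mean RT on repetition trials.
mod_cost = (trials.loc[trials.modality_switch, "rt_ms"].mean()
            - trials.loc[~trials.modality_switch, "rt_ms"].mean())
task_cost = (trials.loc[trials.task_switch, "rt_ms"].mean()
             - trials.loc[~trials.task_switch, "rt_ms"].mean())
print(f"Modality switch cost: {mod_cost:.0f} ms; task switch cost: {task_cost:.0f} ms")
```
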
Citations: 0
4 year olds localize tactile stimuli using an external frame of reference
Pub Date: 2012-01-01 DOI: 10.1163/187847612X646631
Jannath Begum, A. Bremner, Dorothy Cowie
Adults show a deficit in their ability to localize tactile stimuli to their hands when their arms are in the less familiar, crossed posture (e.g., Overvliet et al., 2011; Shore et al., 2002). It is thought that this ‘crossed-hands effect’ arises due to conflict (when the hands are crossed) between the anatomical and external frames of reference within which touches can be perceived. Pagel et al. (2009) studied this effect in young children and observed that the crossed-hands effect first emerges after 5.5 years. In their task, children were asked to judge the temporal order of stimuli presented across their hands in quick succession. Here, we present the findings of a simpler task in which children were asked to localize a single vibrotactile stimulus presented to either hand. We also compared the effect of posture under conditions in which children either did, or did not, have visual information about current hand posture. With this method, we observed a crossed-hands effect in the youngest age group testable: 4-year-olds. We conclude that young children localize tactile stimuli with respect to an external frame of reference from early in childhood or before (cf. Bremner et al., 2008). Additionally, when visual information about posture was made available, 4- to 5-year-olds’ tactile localization accuracy in the uncrossed-hands posture deteriorated and the crossed-hands effect disappeared. We discuss these findings with respect to the visual–tactile–proprioceptive integration abilities of young children and examine potential sources of the discrepancies between our findings and those of Pagel et al. (2009).
Citations: 0
Predictable variations in auditory pitch modulate the spatial processing of visual stimuli: An ERP study
Pub Date: 2012-01-01 DOI: 10.1163/187847612X646488
Fátima Vera-Constán, Irune Fernández-Prieto, Joel García-Morera, J. Navarra
We investigated whether perceiving predictable ‘ups and downs’ in acoustic pitch (as can be heard in musical melodies) can influence the spatial processing of visual stimuli as a consequence of a ‘spatial recoding’ of sound (see Foster and Zatorre, 2010; Rusconi et al., 2006). Event-related potentials (ERPs) were recorded while participants performed a color discrimination task on a visual target that could appear either above or below a centrally-presented fixation point. Each experimental trial started with an auditory isochronous stream of 11 tones including a high- and a low-pitched tone. The visual target appeared isochronously after the last tone. In the ‘non-predictive’ condition, the tones were presented in an erratic fashion (e.g., ‘high-low-low-high-high-low-high …’). In the ‘predictive’ condition, the melodic combination of high- and low-pitched tones was highly predictable (e.g., ‘low-high-low-high-low …’). Within the predictive condition, the visual stimuli appeared congruently or incongruently with respect to the melody (‘… low-high-low-high-low-UP’ or ‘… low-high-low-high-low-DOWN’, respectively). Participants showed faster responses when the visual target appeared after a predictive melody. Electrophysiologically, early (25–150 ms) amplitude effects of predictability were observed in frontal and parietal regions, spreading to central regions (N1) afterwards. Predictability effects were also found in the P2–N2 complex and the P3 in central and parietal regions. Significant auditory-to-visual congruency effects were also observed in the parieto-occipital P3 component. Our findings reveal the existence of crossmodal effects of perceiving auditory isochronous melodies on visual temporal orienting. More importantly, our results suggest that pitch information can be transformed into a spatial code that shapes spatial processing in other modalities such as vision.
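
Amplitude effects in a fixed latency window, such as the early 25–150 ms effect reported here, are commonly quantified as the mean amplitude per subject and condition within that window, followed by a paired comparison. The sketch below illustrates this on simulated epoch data; the sampling rate, subject count, and amplitudes are assumptions, not values from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical ERP epochs: subjects x time points, sampled at 500 Hz with
# the epoch starting 100 ms before target onset. All values are simulated.
rng = np.random.default_rng(1)
n_subjects, n_times, sfreq, t_start = 16, 300, 500.0, -0.100
times = t_start + np.arange(n_times) / sfreq
erp_predictive = rng.normal(0.0, 1.0, (n_subjects, n_times))
erp_nonpredictive = rng.normal(0.3, 1.0, (n_subjects, n_times))

# Mean amplitude in the early 25-150 ms window, per subject and condition.
window = (times >= 0.025) & (times <= 0.150)
mean_pred = erp_predictive[:, window].mean(axis=1)
mean_nonpred = erp_nonpredictive[:, window].mean(axis=1)

# Paired comparison of the two conditions across subjects.
t, p = stats.ttest_rel(mean_pred, mean_nonpred)
print(f"Predictability effect, 25-150 ms: t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```
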
Citations: 0
Assessing audiovisual saliency and visual-information content in the articulation of consonants and vowels on audiovisual temporal perception
Pub Date: 2012-01-01 DOI: 10.1163/187847612X646514
A. Vatakis, C. Spence
Research has revealed different temporal integration windows between and within different speech tokens. The limited set of speech tokens tested to date has not allowed for a proper evaluation of whether such differences are task- or stimulus-driven. We conducted a series of experiments to investigate how the physical differences associated with speech articulation affect the temporal aspects of audiovisual speech perception. Videos of consonants and vowels uttered by three speakers were presented. Participants made temporal order judgments (TOJs) regarding which speech stream had been presented first. The sensitivity of participants’ TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. The results demonstrated that, for the case of place of articulation/roundedness, participants were more sensitive to the temporal order of highly salient speech signals with smaller visual leads at the PSS. This was not the case when manner of articulation/height was evaluated. These findings suggest that the visual speech signal provides substantial cues to the auditory signal that modulate the relative processing times required for the perception of the speech stream. A subsequent experiment explored how the presentation of different sources of visual information modulated these findings. Videos of three consonants were presented under natural and point-light (PL) viewing conditions revealing either parts of the face or the whole face. Preliminary analysis revealed no differences in TOJ accuracy under different viewing conditions. However, the PSS data revealed significant differences between viewing conditions depending on the speech token uttered (e.g., larger visual leads for PL lip/teeth/tongue-only views).
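
PSS and TOJ sensitivity are conventionally estimated by fitting a psychometric function to the proportion of 'visual first' responses across stimulus onset asynchronies: the PSS is the 50% point and the JND follows from the slope. A minimal sketch of such a fit with a cumulative Gaussian; the SOAs, response proportions, and sign convention are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical TOJ data: stimulus onset asynchronies (ms; here, negative
# means the auditory stream leads) and the proportion of 'visual first'
# responses at each SOA. Values are invented for illustration.
soa = np.array([-240, -120, -60, -30, 0, 30, 60, 120, 240], dtype=float)
p_visual_first = np.array([0.05, 0.12, 0.28, 0.40, 0.55, 0.68, 0.80, 0.93, 0.98])

def cum_gauss(x, pss, sigma):
    """Cumulative Gaussian: PSS is the 50% point, sigma sets sensitivity."""
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa, p_visual_first, p0=[0.0, 60.0])
jnd = sigma * norm.ppf(0.75)   # 75% point relative to the PSS
# The sign of the PSS depends on the SOA convention chosen above.
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```
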
Citations: 0
Somatosensory amplification and illusory tactile sensations
Pub Date: 2012-01-01 DOI: 10.1163/187847612X646569
Vrushant Lakhlani, Kirsten J. McKenzie
Experimental studies have demonstrated that it is possible to induce convincing bodily distortions in neurologically healthy individuals through cross-modal manipulations, such as the rubber hand illusion (Botvinick and Cohen, 1998), the parchment skin illusion (Jousmaki and Hari, 1998) and the Somatic Signal Detection Task (SSDT; Lloyd et al., 2008). It has been shown previously with the SSDT that when a tactile stimulus is presented with a simultaneous light flash, individuals show both increased sensitivity to the tactile stimulus and a tendency to report feeling the stimulus even when none was presented, a tendency which varies greatly between individuals but remains constant over time within an individual (McKenzie et al., 2010). Further studies into tactile stimulus discrimination using the Somatic Signal Discrimination Task (SSDiT) have also shown that a concurrent light led to a significant improvement in people’s ability to discriminate ‘weak’ tactile stimuli from ‘strong’ ones, as well as a bias towards reporting any tactile stimulus as ‘strong’ (Poliakoff et al., in preparation), indicating that the light may influence both early and later stages of processing. The current study investigated whether the tendency to report higher numbers of false alarms when carrying out the SSDT is correlated with the tendency to experience higher numbers of cross-modal ‘enhancements’ of weak tactile signals (leading to classifications of ‘weak’ stimuli as ‘strong’, and ‘strong’ stimuli as ‘stronger’). Results will be discussed.
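
SSDT data are typically summarized with signal detection measures: sensitivity (d') and response criterion (c) computed from hit and false-alarm rates on touch-present and touch-absent trials, separately for light and no-light conditions. A minimal sketch of that computation; the trial counts are invented, and the log-linear correction is one common choice rather than necessarily the authors' own:

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """d' and criterion c from raw SSDT counts, with the log-linear
    (add 0.5) correction to avoid infinite z-scores at rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

# Hypothetical counts for touch-present vs. touch-absent trials, split by
# whether the light was presented; all numbers are illustrative only.
d_light, c_light = sdt_measures(hits=38, misses=12, false_alarms=14, correct_rejections=36)
d_nolight, c_nolight = sdt_measures(hits=30, misses=20, false_alarms=6, correct_rejections=44)
print(f"light:    d' = {d_light:.2f}, c = {c_light:.2f}")
print(f"no light: d' = {d_nolight:.2f}, c = {c_nolight:.2f}")
```
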
Citations: 0
Spatial codes for movement coordination do not depend on developmental vision
Pub Date: 2012-01-01 DOI: 10.1163/187847612X646721
T. Heed, B. Roeder
When people make oscillating right–left movements with their two index fingers while holding their hands palms down, they find it easier to move the fingers symmetrically (i.e., both fingers towards the middle, then both fingers to the outside) than parallel (i.e., both fingers towards the left, then both fingers towards the right). It was originally proposed that this effect is due to concurrent activation of homologous muscles in the two hands. However, symmetric movements are also easier when one of the hands is turned palm up, thus requiring concurrent use of opposing rather than homologous muscles. This was interpreted to indicate that movement coordination relies on perceptual rather than muscle-based information (Mechsner et al., 2001). The current experiment tested whether the spatial code used in this task depends on vision. Participants made either symmetrical or parallel right–left movements with their two index fingers while their palms were either both facing down, both facing up, or one facing up and one down. Neither in sighted nor in congenitally blind participants did movement execution depend on hand posture. Rather, both groups were always more efficient when making symmetrical rather than parallel movements with respect to external space. We conclude that the spatial code used for movement coordination does not crucially depend on vision. Furthermore, whereas congenitally blind people predominately use body-based (somatotopic) spatial coding in perceptual tasks (Roder et al., 2007), they use external spatial codes in movement tasks, with performance indistinguishable from the sighted.
Citations: 0