
Seeing and Perceiving: latest publications

The spatial distribution of auditory attention in early blindness
Pub Date : 2012-01-01 DOI: 10.1163/187847612X646767
Elodie Lerens, L. Renier, A. Volder
Early blind people compensate for their lack of vision by developing superior abilities in the remaining senses, such as audition (Collignon et al., 2006; Gougoux et al., 2004; Wan et al., 2010). Previous studies reported supra-normal abilities in auditory spatial attention, particularly for the localization of peripheral stimuli in comparison with frontal stimuli (Lessard et al., 1998; Roder et al., 1999). However, it is unknown whether this specific supra-normal ability extends to the non-spatial attention domain. Here we compared the performance of early blind subjects and blindfolded sighted controls during an auditory non-spatial attention task: target detection among distractors according to tone frequency. We paid special attention to the potential effect of sound source location, comparing the accuracy and speed of target detection in peripheral and frontal space. Blind subjects displayed shorter reaction times than sighted controls for both peripheral and frontal stimuli. Moreover, in both groups of subjects, we observed an interaction between the target location and the distractor location: the target was detected faster when its location differed from that of the distractors. However, this effect was attenuated in early blind subjects and even abolished in the condition with frontal targets and peripheral distractors. We conclude that early blind people compensate for the lack of vision not only by enhancing their ability to process auditory information but also by changing the spatial distribution of their auditory attention resources.
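The target-by-distractor interaction described above can be summarized as a simple 2 × 2 contrast on mean reaction times. A minimal Python sketch with illustrative numbers (the cell means here are hypothetical, not the study's data):

```python
import numpy as np

# Hypothetical mean reaction times (ms) in a 2x2 design:
# rows = target location (frontal, peripheral),
# columns = distractor location (frontal, peripheral).
rt = np.array([[520.0, 480.0],   # frontal targets
               [470.0, 510.0]])  # peripheral targets

# Interaction contrast: (same-location cells) - (different-location cells).
# A positive value means targets are detected faster when their location
# differs from the distractors' location.
interaction = (rt[0, 0] + rt[1, 1]) - (rt[0, 1] + rt[1, 0])
print(interaction)  # 80.0 with these illustrative numbers
```

A positive contrast reproduces the "different location is faster" pattern reported for sighted controls; the abstract reports this contrast shrinking, or vanishing, in the early blind group.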
Citations: 0
Recovery periods of event-related potentials indicating crossmodal interactions between the visual, auditory and tactile system
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647478
Marlene Hense, Boukje Habets, B. Roeder
In sequential unimodal stimulus designs, the time it takes for an event-related potential (ERP) amplitude to recover is often interpreted as a transient decrement in the responsiveness of the generating cortical circuits. This effect, called neural refractoriness, is larger the more similar the repeated stimuli are, and thus indicates the degree of overlap between the neural generator systems activated by two sequential stimuli. We hypothesize that crossmodal refractoriness effects in a crossmodal sequential design might be a good parameter with which to assess the ‘modality overlap’ in the involved neural generators and the degree of crossmodal interaction. In order to investigate crossmodal ERP refractory period effects, we presented visual and auditory stimuli (Experiment 1) and visual and tactile stimuli (Experiment 2) with inter-stimulus intervals (ISIs) of 1 and 2 s to adult participants. Participants had to detect rare auditory and visual stimuli. Both intra- and crossmodal ISI effects were found for all modalities in the three investigated ERP deflections (P1, N1, P2). The topography of the crossmodal refractory period effect of the N1 and P2 deflections in Experiment 1, and of P1 and N1 in Experiment 2, was similar to the corresponding intramodal refractory effect yet more confined, and crossmodal effects were generally weaker. The crossmodal refractory effect for the visual P1, however, had a distinct, less circumscribed topography with respect to the intramodal effect. These results suggest that ERP refractory effects might be a promising indicator of the neural correlates of crossmodal interactions.
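As a rough illustration of how such a refractory effect might be quantified, the sketch below simulates trial-averaged ERPs at two ISIs and scores N1 recovery as the difference in peak amplitude; all signal parameters are invented for illustration and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-trial epochs (trials x samples); 500 Hz sampling,
# epoch 0-600 ms post-stimulus.
fs = 500
t = np.arange(0, 0.6, 1 / fs)

def simulate(amp):
    # An N1-like negative deflection peaking ~100 ms, plus Gaussian noise.
    erp = -amp * np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))
    return erp + rng.normal(0, 0.5, size=(60, t.size))

epochs_isi1 = simulate(amp=2.0)   # short ISI: attenuated (refractory) response
epochs_isi2 = simulate(amp=3.0)   # long ISI: partly recovered response

def n1_amplitude(epochs):
    # Average across trials, then take the most negative value
    # in an 80-140 ms N1 window.
    avg = epochs.mean(axis=0)
    win = (t >= 0.08) & (t <= 0.14)
    return avg[win].min()

# Refractory effect: how much larger (more negative) the N1 is
# after the longer inter-stimulus interval.
effect = n1_amplitude(epochs_isi2) - n1_amplitude(epochs_isi1)
print(effect)  # negative: larger N1 with the longer ISI
```

In the crossmodal version of this logic, the preceding stimulus comes from a different modality, and the size of the amplitude recovery indexes the overlap between the two modalities' generators.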
Citations: 0
Heterogeneous auditory–visual integration: Effects of pitch, band-width and visual eccentricity
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647081
A. Thelen, M. Murray
The identification of monosynaptic connections between primary cortices in non-human primates has recently been complemented by observations of early-latency, low-level non-linear interactions in human brain responses, as well as by observations of facilitative effects of multisensory stimuli on behavior/performance in both humans and monkeys. While there is some evidence in favor of causal links between early-latency interactions within low-level cortices and behavioral facilitation, it remains unknown whether such effects are subserved by direct anatomical connections between primary cortices. In non-human primates, the above monosynaptic projections from primary auditory cortex terminate within peripheral visual field representations in primary visual cortex, suggesting a potential bias for the integration of eccentric visual stimuli and pure tone (vs. broadband) sounds. To date, behavioral effects in humans (and monkeys) have been observed after presenting (para)foveal stimuli with a range of auditory stimuli, from pure tones to noise bursts. The present study aimed to identify any heterogeneity in the integration of auditory–visual stimuli. To this end, we employed a 3 × 3 within-subject design that varied the visual eccentricity of an annulus (2.5°, 5.7°, 8.9°) and the auditory pitch (250, 1000, 4000 Hz) of multisensory stimuli while subjects completed a simple detection task. We also varied the auditory bandwidth (pure tone vs. pink noise) across blocks of trials. To ensure attention to both modalities, multisensory stimuli were equiprobable with unisensory visual and unisensory auditory trials that themselves varied along the abovementioned dimensions. Median reaction times for each stimulus condition, as well as the percentage gain/loss of each multisensory condition vs. the best constituent unisensory condition, were measured.
The preliminary results reveal that multisensory interactions (as measured from simple reaction times) are indeed heterogeneous across the tested dimensions and may provide a means of delimiting the anatomo-functional substrates of behaviorally relevant early-latency neural response interactions. Interestingly, the preliminary results suggest selective interactions for visual stimuli when paired with broadband stimuli but not when paired with pure tones. More precisely, centrally presented visual stimuli show the greatest index of multisensory facilitation when coupled to a high-pitch tone embedded in pink noise, while visual stimuli presented at approximately 5.7° of visual angle show the greatest slowing of reaction times.
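The gain/loss score described above can be computed as in this brief sketch; the reaction times below are made up, and only the scoring rule (median RT of the multisensory condition vs. the best constituent unisensory condition) follows the abstract:

```python
import numpy as np

# Hypothetical single-trial reaction times (ms) for one stimulus pairing.
rt_visual = np.array([312, 298, 305, 330, 290, 315], dtype=float)
rt_auditory = np.array([340, 355, 338, 360, 345, 350], dtype=float)
rt_multisensory = np.array([280, 295, 288, 300, 276, 292], dtype=float)

# Facilitation is scored against the *best* (fastest) unisensory
# condition, using median RTs.
best_unisensory = min(np.median(rt_visual), np.median(rt_auditory))
multi = np.median(rt_multisensory)

# Percentage gain: positive = multisensory speed-up,
# negative = multisensory slowing.
gain = 100 * (best_unisensory - multi) / best_unisensory
print(round(gain, 1))  # 6.0 with these illustrative numbers
```

Computing this index separately for every eccentricity-by-pitch cell is what lets the authors ask whether facilitation is heterogeneous across the stimulus space rather than uniform.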
Citations: 0
Multisensory processes in the synaesthetic brain — An event-related potential study in multisensory competition situations
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647333
J. Neufeld, C. Sinke, Daniel Wiswede, H. Emrich, S. Bleich, G. Szycik
In synaesthesia, certain external stimuli (e.g., music) automatically trigger internally generated sensations (e.g., colour). Results of behavioural investigations indicate that multisensory processing works differently in synaesthetes. However, the reasons for these differences and the underlying neural correlates remain unclear. The aim of the current study was to investigate whether synaesthetes show differences in the electrophysiological components of multimodal processing. Further, we wanted to test synaesthetes for an enhanced distractor-filtering ability in multimodal situations. To this end, line drawings of animals and objects were presented to participants, either with a congruent sound (a typical sound for the presented picture, e.g., a picture of a bird together with a chirp), with an incongruent sound (a picture of a bird together with a gun shot), or without simultaneous auditory stimulation. 14 synaesthetes (auditory–visual and grapheme–colour synaesthetes) and 13 controls participated in the study. We found differences in the event-related potentials between synaesthetes and controls, indicating altered multisensory processing of bimodal stimuli in synaesthetes in competition situations. These differences were found especially over frontal brain sites. An interaction effect between group (synaesthetes vs. controls) and stimulation (unimodal visual vs. congruent multimodal) could not be detected. We therefore conclude that multisensory processing works largely similarly in synaesthetes and controls, and that only integration processes in multisensory competition situations are specifically altered in synaesthetes.
Citations: 0
Age-related changes in temporal processing of vestibular stimuli
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647847
Alex K. Malone, N. N. Chang, T. Hullar
Falls are one of the leading causes of disability in the elderly. Previous research has shown that falls may be related to changes in the temporal integration of multisensory stimuli. This study compared the temporal integration and processing of a vestibular and an auditory stimulus in younger and older subjects. The vestibular stimulus consisted of a continuous sinusoidal rotational velocity delivered using a rotational chair, and the auditory stimulus consisted of 5 ms of white noise presented dichotically through headphones (both at 0.5 Hz). Simultaneity was defined as perceiving the chair at its furthest rightward or leftward point at the same moment as the auditory stimulus was perceived in the contralateral ear. The temporal offset of the auditory stimulus was adjusted using a method of constant stimuli, so that the auditory stimulus either led or lagged true simultaneity. 15 younger (ages 21–27) and 12 older (ages 63–89) healthy subjects were tested using a two-alternative forced-choice task to determine at what offsets they perceived the two stimuli as simultaneous. Younger subjects had a mean temporal binding window (TBW) of 334 ± 37 ms (mean ± SEM) and a mean point of subjective simultaneity of 83 ± 15 ms. Older subjects had a mean TBW of 556 ± 36 ms and a mean point of subjective simultaneity of 158 ± 27 ms. Both differences were significant, indicating that older subjects integrate vestibular and auditory stimuli over a wider temporal range than younger subjects. These findings were consistent upon retesting and were not due to differences in vestibular perception thresholds.
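As an illustration of how a point of subjective simultaneity (PSS) and a temporal binding window (TBW) might be extracted from method-of-constant-stimuli data, here is a numpy-only sketch that fits a Gaussian "simultaneity" curve by grid search. The response proportions are invented, and scoring the TBW as the curve's full width at half maximum is one common choice, not necessarily the authors' exact procedure:

```python
import numpy as np

# Hypothetical proportion-"simultaneous" responses from a method of
# constant stimuli (offsets in ms; negative = auditory stimulus leads).
offsets = np.array([-500, -350, -200, -50, 100, 250, 400, 550], dtype=float)
p_simult = np.array([0.05, 0.20, 0.55, 0.90, 0.95, 0.70, 0.30, 0.10])

# Fit a Gaussian tuning curve by brute-force grid search, with the peak
# height pinned to the maximum observed proportion for simplicity.
pss_grid = np.arange(-100, 201, 5, dtype=float)
sigma_grid = np.arange(100, 501, 5, dtype=float)
best = (0.0, 100.0, np.inf)
for pss_c in pss_grid:
    for sigma_c in sigma_grid:
        pred = p_simult.max() * np.exp(-((offsets - pss_c) ** 2)
                                       / (2 * sigma_c ** 2))
        err = ((pred - p_simult) ** 2).sum()
        if err < best[2]:
            best = (pss_c, sigma_c, err)

pss, sigma, _ = best
tbw = 2.355 * sigma  # full width at half maximum of the fitted Gaussian
print(pss, round(tbw))
```

With real data one would fit per subject and then compare group means, which is where the younger-vs-older TBW difference reported above would emerge.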
Citations: 0
Evaluative similarity hypothesis of crossmodal correspondences: A developmental view
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647603
D. Janković
Crossmodal correspondences have been widely demonstrated, although the mechanisms behind the phenomenon have not yet been fully established. According to the evaluative similarity hypothesis, crossmodal correspondences are influenced by the evaluative (affective) similarity of stimuli from different sensory modalities (Jankovic, 2010, Journal of Vision 10(7), 859). On this view, detection of similar evaluative information in stimulation from different sensory modalities facilitates crossmodal correspondences and multisensory integration. The aim of this study was to explore the evaluative similarity hypothesis of crossmodal correspondences in children. In Experiment 1, two groups of participants (nine- and thirteen-year-olds) were asked to make explicit matches between presented auditory stimuli (1-s sound clips) and abstract visual patterns. In Experiment 2, the same participants judged the abstract visual patterns and auditory stimuli on a set of evaluative attributes measuring affective valence and arousal. The results showed that crossmodal correspondences were mostly influenced by the evaluative similarity of the visual and auditory stimuli in both age groups. Most frequently matched were visual and auditory stimuli congruent in both valence and arousal, followed by stimuli congruent in valence, and finally stimuli congruent in arousal. Evaluatively incongruent stimuli showed low crossmodal associations, especially in the older group.
Citations: 1
An invisible speaker can facilitate auditory speech perception
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647801
M. Grabowecky, Emmanuel Guzman-Martinez, L. Ortega, Satoru Suzuki
Watching moving lips facilitates auditory speech perception when the mouth is attended. However, recent evidence suggests that visual attention and awareness are mediated by separate mechanisms. We investigated whether lip movements suppressed from visual awareness can facilitate speech perception. We used a word categorization task in which participants listened to spoken words and determined as quickly and accurately as possible whether or not each word named a tool. While participants listened to the words, they watched a visual display presenting a video clip of the speaker synchronously speaking the auditorily presented words, or of the same speaker articulating different words. Critically, the speaker’s face was either visible (the aware trials) or suppressed from awareness using continuous flash suppression. Aware and suppressed trials were randomly intermixed. A secondary probe-detection task ensured that participants attended to the mouth region regardless of whether the face was visible or suppressed. On the aware trials, responses to the tool targets were no faster with synchronous than with asynchronous lip movements, perhaps because the visual information was inconsistent with the auditory information on 50% of the trials. However, on the suppressed trials, responses to the tool targets were significantly faster with synchronous than with asynchronous lip movements. These results demonstrate that even when a random dynamic mask renders a face invisible, lip movements are processed by the visual system with sufficiently high temporal resolution to facilitate speech perception.
Seeing and Perceiving, 25(1), 148–148.
Cited by: 0
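The core result above is an interaction: synchronous lip movements speed target detection only when the face is suppressed from awareness. A minimal sketch of that analysis, with invented reaction times (not the authors' data), looks like this:

```python
# Hypothetical illustration of the Grabowecky et al. analysis: mean
# reaction times for tool-target detection, split by visual awareness
# (aware vs. suppressed face) and audiovisual synchrony.
# All trial values below are invented for demonstration only.
from statistics import mean

trials = [
    # (awareness, synchrony, reaction time in ms)
    ("aware",      "synchronous",  612), ("aware",      "asynchronous", 608),
    ("aware",      "synchronous",  625), ("aware",      "asynchronous", 619),
    ("suppressed", "synchronous",  581), ("suppressed", "asynchronous", 642),
    ("suppressed", "synchronous",  575), ("suppressed", "asynchronous", 650),
]

def mean_rt(awareness, synchrony):
    """Mean RT over all trials in the given condition cell."""
    return mean(rt for a, s, rt in trials if a == awareness and s == synchrony)

# Synchrony advantage = asynchronous RT minus synchronous RT;
# the reported pattern shows an advantage only under suppression.
advantage_aware = mean_rt("aware", "asynchronous") - mean_rt("aware", "synchronous")
advantage_suppressed = mean_rt("suppressed", "asynchronous") - mean_rt("suppressed", "synchronous")
print(advantage_aware, advantage_suppressed)
```

With these toy numbers the synchrony advantage is absent (slightly reversed) on aware trials and large on suppressed trials, mirroring the reported interaction.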
Updating expectancies about audiovisual associations in speech
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647946
Tim Paris, Jeesun Kim, C. Davis
The processing of multisensory information depends on the learned association between sensory cues. In the case of speech there is a well-learned association between the movements of the lips and the subsequent sound. That is, particular lip and mouth movements reliably lead to a specific sound. EEG and MEG studies that have investigated the differences between this ‘congruent’ AV association and other ‘incongruent’ associations have commonly reported ERP differences from 350 ms after sound onset. Using a 256 active electrode EEG system, we tested whether this ‘congruency effect’ would be reduced in the context where most of the trials had an altered audiovisual association (auditory speech paired with mismatched visual lip movements). Participants were presented stimuli over 2 sessions: in one session only 15% were incongruent trials; in the other session, 85% were incongruent trials. We found a congruency effect, showing differences in ERP between congruent and incongruent speech between 350 and 500 ms. Importantly, this effect was reduced within the context of mostly incongruent trials. This reduction in the congruency effect indicates that the way in which AV speech is processed depends on the context it is viewed in. Furthermore, this result suggests that exposure to novel sensory relationships leads to updated expectations regarding the relationship between auditory and visual speech cues.
Seeing and Perceiving, 120(1), 164–164.
Cited by: 0
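A congruency effect like the one Paris et al. describe is typically quantified by averaging each condition's ERP over the analysis window (here 350–500 ms after sound onset) and taking the difference. The sketch below uses invented waveforms and an assumed 100 Hz sampling rate, purely to illustrate the windowed-difference computation:

```python
# Toy ERP congruency-effect computation (fabricated data, not the
# authors'): mean amplitude in a 350-500 ms post-onset window for
# congruent vs. incongruent audiovisual speech, then the difference.

SRATE = 100  # assumed sampling rate in Hz -> one sample per 10 ms

def window_mean(erp, start_ms, end_ms, srate=SRATE):
    """Mean amplitude of an ERP (list of uV samples time-locked to
    sound onset) within [start_ms, end_ms)."""
    i0 = start_ms * srate // 1000
    i1 = end_ms * srate // 1000
    seg = erp[i0:i1]
    return sum(seg) / len(seg)

# Toy grand-average waveforms covering 0-600 ms (61 samples): a flat
# baseline with a deflection spanning the 350-500 ms window.
congruent   = [0.0] * 35 + [1.0] * 15 + [0.0] * 11
incongruent = [0.0] * 35 + [2.5] * 15 + [0.0] * 11

congruency_effect = window_mean(incongruent, 350, 500) - window_mean(congruent, 350, 500)
print(round(congruency_effect, 2))
```

In the study, this difference score would then be compared between the mostly-congruent and mostly-incongruent sessions to show the context-dependent reduction.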
Electrophysiological correlates of tactile and visual perception during goal-directed movement
Pub Date : 2012-01-01 DOI: 10.1163/187847612X648008
G. Juravle, T. Heed, C. Spence, B. Roeder
Tactile information arriving at our sensory receptors is differentially processed over the various temporal phases of goal-directed movements. By using event-related potentials (ERPs), we investigated the neuronal correlates of tactile information processing during movement. Participants performed goal-directed reaches for an object placed centrally on the table in front of them. Tactile and visual stimuli were presented in separate trials during the different phases of the movement (i.e., preparation, execution, and post-movement). These stimuli were independently delivered to either the moving or the resting hand. In a control condition, the participants only performed the movement, while omission (movement-only) ERPs were recorded. Participants were told to ignore the presence or absence of any sensory events and solely concentrate on the execution of the movement. The results highlighted enhanced ERPs between 80 and 200 ms after tactile stimulation, and between 100 and 250 ms after visual stimulation. These modulations were greatest over the execution phase of the goal-directed movement, they were effector-based (i.e., significantly more negative for stimuli presented at the moving hand), and modality-independent (i.e., similar ERP enhancements were observed for both tactile and visual stimuli). The enhanced processing of sensory information over the execution phase of the movement suggests that incoming sensory information may be used for a potential adjustment of the current motor plan. Moreover, these results indicate a tight interaction between attentional mechanisms and the sensorimotor system.
Seeing and Perceiving, 25(1), 170–170.
Cited by: 0
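The key claim in the Juravle et al. abstract is effector-based modulation that peaks during movement execution: stimuli at the moving hand evoke more negative ERPs than stimuli at the resting hand, most strongly in the execution phase. A hedged sketch of that comparison, with fabricated window-averaged amplitudes, is:

```python
# Illustrative pattern (invented amplitudes, not the authors' data):
# mean ERP amplitude (uV) in the 80-200 ms post-stimulus window, by
# movement phase and stimulated hand. The effector effect (moving
# minus resting) should be most negative during execution.

amplitudes = {
    ("preparation",   "moving"): -1.2, ("preparation",   "resting"): -0.9,
    ("execution",     "moving"): -2.8, ("execution",     "resting"): -1.1,
    ("post-movement", "moving"): -1.4, ("post-movement", "resting"): -1.0,
}

def effector_effect(phase):
    """Moving-hand minus resting-hand amplitude for one phase; more
    negative values indicate stronger effector-based modulation."""
    return amplitudes[(phase, "moving")] - amplitudes[(phase, "resting")]

phases = ["preparation", "execution", "post-movement"]
strongest = min(phases, key=effector_effect)  # most negative difference
print(strongest)
```

The same comparison run on visual-stimulus epochs (in a 100–250 ms window) would illustrate the modality-independence the abstract reports.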
Temporal disparity effects on audiovisual integration in low vision individuals
Pub Date : 2012-01-01 DOI: 10.1163/187847612X648044
Stefano Targher, Valeria Occelli, M. Zampini
Our recent findings have shown that sounds improve visual detection in low vision individuals when the audiovisual pairs are presented simultaneously. The present study aims to investigate possible temporal aspects of the audiovisual enhancement effect that we have previously reported. Low vision participants were asked to detect the presence of a visual stimulus (yes/no task) either presented in isolation or together with an auditory stimulus at different SOAs. In the first experiment, when the sound was always leading the visual stimuli, there was a significant visual detection enhancement even when the visual stimulus was temporally delayed by 400 ms. However, the visual detection improvement was reduced in the second experiment, in which the sound could randomly lead or lag the visual stimulus. A significant enhancement was found only when the audiovisual stimuli were synchronized. Taken together, the results of the present study suggest that high-level associations between modalities might modulate audiovisual interactions in low vision individuals.
Seeing and Perceiving, 25(1), 175–175.
Cited by: 0
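Performance in a yes/no detection task like Targher et al.'s is commonly summarized per SOA as sensitivity (d′), computed from hit and false-alarm rates. The sketch below uses invented trial counts and an assumed SOA sign convention (negative = sound leads, 0 = synchronous, positive = sound lags) to show the computation:

```python
# Hedged sketch of a yes/no visual-detection analysis: d' per
# audiovisual SOA. All counts are fabricated for illustration.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity, with a log-linear correction so
    hit/false-alarm rates of 0 or 1 do not yield infinite z-scores."""
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# SOA in ms (assumed convention: negative = sound leads) ->
# (hits, misses, false alarms, correct rejections)
counts = {
    -400: (30, 10, 5, 35),
       0: (36,  4, 5, 35),
     400: (26, 14, 5, 35),
}
sensitivity = {soa: d_prime(*c) for soa, c in counts.items()}
```

With these toy counts, sensitivity peaks at the synchronous SOA, the pattern the second experiment reports.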