
Latest publications in Seeing and Perceiving

Examining tactile spatial remapping using transcranial magnetic stimulation
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647757
Jared Medina, S. Khurshid, Roy H. Hamilton, H. Coslett
Previous research has provided evidence for two stages of tactile processing (e.g., Azanon and Soto-Faraco, 2008; Groh and Sparks, 1996). First, tactile stimuli are represented in a somatotopic representation that does not take into account body position in space, followed by a representation of body position in external space (body posture representation, see Medina and Coslett, 2010). In order to explore potential functional and neural dissociations between these two stages of processing, we presented eight participants with TMS before and after a tactile temporal order judgment (TOJ) task (see Yamamoto and Kitazawa, 2001). Participants were tested with their hands crossed and uncrossed before and after 20 min of 1 Hz repetitive TMS (rTMS). Stimulation occurred at the left anterior intraparietal sulcus (aIPS, somatotopic representation) or left Brodmann Area 5 (BA5, body posture) during two separate sessions. We predicted that left aIPS TMS would affect a somatotopic representation of the body, and would disrupt performance in both the uncrossed and crossed conditions. However, we predicted that TMS of body posture areas (BA5) would disrupt mechanisms for updating limb position with the hands crossed, resulting in a paradoxical improvement in performance after TMS. Using thresholds derived from adaptive staircase procedures, we found that left aIPS TMS disrupted performance in the uncrossed condition. However, left BA5 TMS resulted in a significant improvement in performance with the hands crossed. We discuss these results with reference to potential dissociations of the traditional body schema.
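The thresholds mentioned above were derived from adaptive staircase procedures. As a rough illustration of how such a procedure converges on a TOJ threshold, here is a minimal Python sketch of a 2-down/1-up staircase run against a simulated observer; the rule, step size, and observer model are illustrative assumptions, not the authors' exact protocol.

```python
# A minimal sketch of an adaptive staircase for estimating a tactile TOJ
# threshold (SOA in ms). The 2-down/1-up rule, step sizes, and the simulated
# observer are illustrative assumptions, not the authors' exact procedure.
import math
import random

def simulated_observer(soa_ms, jnd_ms=60.0):
    """Probability of a correct temporal-order response for a given SOA,
    modelled as a cumulative Gaussian centred on zero (assumption)."""
    p_correct = 0.5 + 0.5 * math.erf(abs(soa_ms) / (jnd_ms * math.sqrt(2)))
    return random.random() < p_correct

def run_staircase(start_soa=200.0, step=20.0, n_reversals=12):
    soa, correct_streak, direction, reversals = start_soa, 0, -1, []
    while len(reversals) < n_reversals:
        if simulated_observer(soa):
            correct_streak += 1
            if correct_streak == 2:            # 2-down/1-up -> ~70.7% correct
                correct_streak = 0
                if direction == +1:            # descending after ascending
                    reversals.append(soa)
                direction = -1
                soa = max(5.0, soa - step)
        else:
            correct_streak = 0
            if direction == -1:                # ascending after descending
                reversals.append(soa)
            direction = +1
            soa += step
    return sum(reversals[-8:]) / len(reversals[-8:])   # mean of last reversals

if __name__ == "__main__":
    print(f"Estimated TOJ threshold: {run_staircase():.1f} ms SOA")
```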
Citations: 0
Sensorimotor temporal recalibration within and across limbs
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647694
K. Yarrow, Ingvild Sverdrup-Stueland, Derek H. Arnold
Repeated presentation of artificially induced delays between actions and events leads to shifts in participants’ subjective simultaneity towards the adapted lag. This sensorimotor temporal recalibration generalises across sensory modalities, presumably via a shift in the motor component. Here we examined two overlapping questions regarding (1) the level of representation of temporal recalibration (by testing whether it also generalises across limbs) and (2) the neural underpinning of the shift in the motor component (by comparing adaptation magnitude in the foot relative to the hand). An adaptation-test paradigm was used, with hand or foot adaptation, and same-limb and cross-limb test phases that used a synchrony judgement (SJ) task. By demonstrating that temporal recalibration occurs in the foot, we confirmed that it is a robust motor phenomenon. Shifts in the distribution of participants’ synchrony responses were quantified using a detection-theoretic model of the SJ task, in which a shift of both boundaries together gives a stronger indication that the effect is not simply a result of decision bias. The results showed a significant shift in both boundaries in the same-limb conditions, whereas there was only a shift of the higher boundary in the cross-limb conditions. These two patterns most likely reflect a genuine shift in neural timing and a criterion shift, respectively.
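The detection-theoretic model of the SJ task referred to above treats a "synchronous" response as arising whenever the noisy internal estimate of asynchrony falls between two decision boundaries; a genuine recalibration shifts both boundaries together, whereas a criterion change moves only one. Below is a minimal Python sketch of such a two-boundary model with a coarse maximum-likelihood grid fit; the parameter ranges and example counts are hypothetical.

```python
# A minimal sketch of a two-criterion ("two-boundary") detection-theoretic
# model of the synchrony-judgement (SJ) task: a 'synchronous' response occurs
# when the noisy internal asynchrony falls between a low and a high decision
# boundary. Parameter ranges, grid fit, and example data are assumptions.
import math

def p_sync(soa, c_low, c_high, sigma):
    """P('synchronous' | SOA) under Gaussian sensory noise."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    return phi((c_high - soa) / sigma) - phi((c_low - soa) / sigma)

def neg_log_likelihood(data, c_low, c_high, sigma):
    nll = 0.0
    for soa, n_sync, n_total in data:
        p = min(max(p_sync(soa, c_low, c_high, sigma), 1e-6), 1 - 1e-6)
        nll -= n_sync * math.log(p) + (n_total - n_sync) * math.log(1 - p)
    return nll

def fit(data):
    """Coarse grid search over the two boundaries and the noise SD."""
    best = None
    for c_low in range(-300, 1, 25):
        for c_high in range(0, 301, 25):
            for sigma in (30, 60, 90, 120):
                nll = neg_log_likelihood(data, c_low, c_high, sigma)
                if best is None or nll < best[0]:
                    best = (nll, c_low, c_high, sigma)
    return best[1:]

# Hypothetical counts: (SOA in ms, n 'synchronous' responses, n trials)
data = [(-240, 2, 20), (-120, 8, 20), (0, 18, 20), (120, 9, 20), (240, 1, 20)]
print("Fitted (c_low, c_high, sigma):", fit(data))
```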
Citations: 1
Visual benefit in bimodal training with highly distorted speech sound
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647883
Mika Sato, T. Kawase, S. Sakamoto, Yôiti Suzuki, Toshimitsu Kobayashi
Artificial auditory devices such as cochlear implants (CIs) and auditory brainstem implants (ABIs) have become standard means to manage profound sensorineural hearing loss. However, because of their structural limitations compared to the cochlea and the cochlear nucleus, the generated auditory sensations are still imperfect, and recipients need postoperative auditory rehabilitation. To improve these rehabilitation programs, this study evaluated the effects of bimodal (audio–visual) training under seven experimental conditions of distorted speech sound, termed noise-vocoded speech sound (NVSS), which is processed in a manner similar to a CI/ABI speech processor. Word intelligibilities under the seven conditions of two-band noise-vocoded speech were measured for auditory (A), visual (V) and auditory–visual (AV) modalities after a few hours of bimodal (AV) training. The experiment was performed with 56 normal-hearing subjects. A and AV word recognition performance differed significantly across the seven auditory conditions. V word intelligibility was not influenced by the accompanying auditory condition. However, V word intelligibility was correlated with AV word recognition under all frequency conditions, whereas the correlation between A and AV word intelligibilities was ambiguous. These findings suggest the importance of visual cues in AV speech perception under extremely degraded auditory conditions, and underscore the possible effectiveness of bimodal audio–visual training in postoperative rehabilitation for patients with postlingual deafness who have undergone artificial auditory device implantation.
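Noise-vocoded speech is generated by band-pass filtering the speech signal, extracting the amplitude envelope in each band, and using each envelope to modulate band-limited noise. The sketch below shows this general processing chain for two bands in Python; the cut-off frequencies, filter orders, and envelope smoothing are assumptions, not the exact parameters used in the study.

```python
# A minimal sketch of a two-band noise vocoder (noise-vocoded speech, NVSS):
# the waveform is band-pass filtered, each band's amplitude envelope modulates
# band-limited noise, and the bands are summed. Band edges, filter orders and
# envelope smoothing are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(speech, fs, edges=(100.0, 1500.0, 7000.0)):
    out = np.zeros_like(speech, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        band = filtfilt(b, a, speech)                  # analysis band
        envelope = np.abs(hilbert(band))               # amplitude envelope
        b_env, a_env = butter(2, 30.0, btype="low", fs=fs)
        envelope = filtfilt(b_env, a_env, envelope)    # smooth the envelope
        carrier = filtfilt(b, a, np.random.randn(len(speech)))  # band noise
        out += envelope * carrier
    return out / (np.max(np.abs(out)) + 1e-12)         # normalise

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    demo = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 3 * t))
    vocoded = noise_vocode(demo, fs)
    print(vocoded.shape, float(vocoded.max()))
```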
Citations: 0
An ERP study of audiovisual simultaneity perception
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647900
M. Binder
The aim of this study was to examine the relation between conscious perception of the temporal relation between the elements of an audiovisual pair and the dynamics of the accompanying neural activity. This was done using a simultaneity judgment task and EEG event-related potentials (ERPs). In Experiment 1, pairs of 10 ms white-noise bursts and flashes were used. On presentation of each pair, subjects pressed one of two buttons to indicate whether the stimuli were synchronous. Values of stimulus onset asynchrony (SOA) were based on individual estimates of simultaneity thresholds (50/50 probability of either response). These were estimated prior to the EEG measurement using an interleaved staircase procedure involving both sound-first and flash-first stimulus pairs. Experiment 2 had an identical setup, except that subjects indicated whether the audio–visual pair began simultaneously (termination was synchronous). ERP waveforms were time-locked to the second stimulus in the pair. Effects of synchrony perception were studied by comparing ERPs in trials judged as simultaneous and non-simultaneous. Subjects were divided into two subgroups with similar SOA values. In both experiments, at about 200 ms after the second stimulus onset, a stronger ERP positivity for trials judged as non-simultaneous was observed at parieto-central sites. This effect was observed for both sound-first and flash-first pairs and for both SOA subgroups. The results demonstrate that the perception of temporal relations between multimodal stimuli with identical physical parameters is reflected in localized ERP differences. Given their localization in posterior parietal regions, these differences may be viewed as correlates of conscious perception of temporal integration vs. separation of audiovisual stimuli.
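As a rough illustration of the ERP logic described above (epochs time-locked to the second stimulus, baseline-corrected, averaged by judgment, and compared in a window around 200 ms), here is a minimal Python sketch on synthetic data; the sampling rate, epoch limits, analysis window, and array names are assumptions.

```python
# A minimal sketch of time-locked ERP averaging split by simultaneity
# judgement. Sampling rate, epoch limits, 200 ms window and synthetic data
# are illustrative assumptions, not the study's recording parameters.
import numpy as np

FS = 500                       # sampling rate (Hz), assumption
PRE, POST = 0.1, 0.5           # epoch from -100 ms to +500 ms

def erp_by_judgement(eeg, onsets, judged_nonsim):
    """Average epochs around 'onsets', split by the simultaneity judgement."""
    pre, post = int(PRE * FS), int(POST * FS)
    epochs = np.stack([eeg[s - pre:s + post] for s in onsets])
    epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)   # baseline correct
    return epochs[~judged_nonsim].mean(0), epochs[judged_nonsim].mean(0)

def mean_amplitude(wave, t_lo=0.18, t_hi=0.22):
    """Mean amplitude in a window around 200 ms post-stimulus."""
    lo, hi = int((PRE + t_lo) * FS), int((PRE + t_hi) * FS)
    return wave[lo:hi].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg = rng.normal(size=FS * 120)                   # 2 min of fake EEG
    onsets = rng.integers(FS, FS * 119, size=80)      # second-stimulus onsets
    judgement = rng.random(80) < 0.5                  # True = 'non-simultaneous'
    erp_sim, erp_nonsim = erp_by_judgement(eeg, onsets, judgement)
    print(mean_amplitude(erp_nonsim) - mean_amplitude(erp_sim))
```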
Citations: 0
The size of the ventriloquist effect is modulated by emotional valence
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647964
Mario Maiworm, Marina Bellantoni, C. Spence, B. Roeder
It is currently unknown to what extent the integration of inputs from different modalities is subject to the influence of attention, emotion, and/or motivation. The ventriloquist effect is widely assumed to be an automatic, crossmodal phenomenon, normally shifting the perceived location of an auditory stimulus toward a concurrently presented visual stimulus. The present study examined whether audiovisual binding, as indicated by the magnitude of the ventriloquist effect, is influenced by threatening auditory stimuli presented prior to the ventriloquist experiment. Syllables spoken in a fearful voice were presented from one of eight loudspeakers while syllables spoken in a neutral voice were presented from the other seven locations. Subsequently, participants had to localize pure tones while trying to ignore concurrent light flashes (both of which were emotionally neutral). A reliable ventriloquist effect was observed. The emotional stimulus manipulation resulted in a reduced ventriloquist effect in both hemifields, as compared to a control group exposed to a similar attention-capturing but non-emotional manipulation. These results suggest that the emotional system is capable of influencing crossmodal binding processes that have heretofore been considered automatic.
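A ventriloquist-effect magnitude is typically quantified as the shift of the reported sound location toward the concurrent visual stimulus, often expressed as a fraction of the audiovisual disparity. The short Python sketch below illustrates one such measure on hypothetical data; the abstract does not specify the exact metric used here.

```python
# A minimal sketch of one common ventriloquist-effect measure: mean shift of
# the reported sound location toward the light, as a fraction of the
# audiovisual disparity. Data and metric are illustrative assumptions.
import numpy as np

def ventriloquist_effect(reported_deg, sound_deg, light_deg):
    """0 = no capture by the visual stimulus, 1 = full capture."""
    disparity = np.asarray(light_deg, dtype=float) - np.asarray(sound_deg, dtype=float)
    shift = np.asarray(reported_deg, dtype=float) - np.asarray(sound_deg, dtype=float)
    return float(np.mean(shift / disparity))

# Hypothetical trials: sound at 0 deg, light at +10 deg, reports pulled partway over.
print(ventriloquist_effect([3.1, 2.4, 4.0, 2.8], [0, 0, 0, 0], [10, 10, 10, 10]))
```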
Citations: 0
Redundancy gains in audio–visual search
Pub Date : 2012-01-01 DOI: 10.1163/187847612X648116
Tifanie Bouchara, B. Katz
This study concerns stimulus-driven perceptual processes involved in target search among concurrent distractors, with a focus on comparing auditory, visual, and audio–visual search tasks. Previous work on unimodal search tasks highlighted different preattentive features that can enhance target saliency, making it ‘pop out’, e.g., a visually sharp target among blurred distractors. A cue from another modality can also help direct attention towards the target. Our study investigates a new kind of search task in which stimuli consist of audio–visual objects presented using both the audio and visual modalities simultaneously. Redundancy effects are evaluated, first from the combination of the audio and visual modalities, and second from the combination of each unimodal cue in such a bimodal search task. A perceptual experiment was performed in which the task was to identify an audio–visual object from a set of six competing stimuli. We employed static visual blur and developed an auditory blur analogue to cue the search. Results show that both visual and auditory blur render distractors less prominent and automatically attract attention toward a sharp target. The combination of both unimodal blurs, i.e., audio–visual blur, also proved to be an efficient cue for facilitating the bimodal search task. Results also showed that search tasks were performed faster in redundant bimodal conditions than in unimodal ones. This gain was due to a redundant-target effect only, with no additional redundancy gain from the cue combination: cueing the visual component alone was sufficient, and adding the redundant audio cue produced no further improvement in bimodal search tasks.
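The abstract compares search times across unimodal and redundant bimodal conditions. One common way to probe whether a bimodal speed-up exceeds what parallel unimodal processing would predict (not necessarily the analysis used in this study) is the race-model inequality; a minimal Python sketch with hypothetical reaction times follows.

```python
# A minimal sketch of a race-model inequality check for redundancy gains:
# the redundant-condition RT distribution is compared against the sum of the
# unimodal RT distributions. RT values are hypothetical, and this specific
# analysis is an assumption rather than the authors' reported method.
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times evaluated at t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_model_violation(rt_audio, rt_visual, rt_bimodal, t_grid):
    """Positive values indicate RTs faster than any race model allows."""
    bound = np.minimum(ecdf(rt_audio, t_grid) + ecdf(rt_visual, t_grid), 1.0)
    return ecdf(rt_bimodal, t_grid) - bound

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    rt_a = rng.normal(1.10, 0.15, 200)    # hypothetical audio-cue RTs (s)
    rt_v = rng.normal(0.95, 0.15, 200)    # hypothetical visual-cue RTs (s)
    rt_av = rng.normal(0.90, 0.15, 200)   # hypothetical bimodal-cue RTs (s)
    t = np.linspace(0.5, 1.5, 21)
    print("max violation:", float(np.max(race_model_violation(rt_a, rt_v, rt_av, t))))
```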
Citations: 0
From observation to enactment: Can dance experience enhance multisensory temporal integration?
Pub Date : 2012-01-01 DOI: 10.1163/187847612X648170
Helena Sgouramani, Chris Muller, L. V. Noorden, M. Leman, A. Vatakis
We report two experiments aiming to define how experience and stimulus enactment affect multisensory temporal integration for ecologically valid stimuli. In both experiments, a number of different dance steps were used as audiovisual displays at a range of stimulus onset asynchronies using the method of constant stimuli. Participants were either professional dancers or non-dancers. In Experiment 1, using a simultaneity judgment (SJ) task, we aimed to define, for the first time, the temporal window of integration (TWI) for dancers and non-dancers and the role of experience in SJ performance. Preliminary results showed that dancers had a smaller TWI than non-dancers for all stimuli tested, with higher-complexity (participant-rated) dance steps requiring larger auditory leads for both participant groups. In Experiment 2, we adopted a more embodied point of view by examining how enactment of the stimulus modulates the TWIs. Participants were presented with simple audiovisual dance steps that could be synchronous or asynchronous and were asked to synchronize with the audiovisual display by actually performing the step indicated. A motion capture system recorded their performance with millisecond accuracy. Based on the optimal integration hypothesis, we are currently examining the data in terms of which modality will be dominant, considering that dance is a spatially (visual) and temporally (audio) coordinated action. Any corrective adjustments, accelerations–decelerations, or hesitations will be interpreted as indicators of perceived ambiguity relative to performance in the synchronous condition; thus, for the first time, an implicit SJ response will be measured.
Citations: 1
The impact of imagery-evoking category labels on perceived variety
Pub Date : 2012-01-01 DOI: 10.1163/187847612X648189
Tamara L. Ansons, Aradhna Krishna, N. Schwarz
Does sensory imagery influence consumers’ perception of variety for a set of products? We tested this possibility in two studies in which participants received one of three alternate coffee menus where all the coffees were the same but the category labels varied in how imagery-evocative they were. The less evocative labels (i) were more generic in nature (e.g., ‘Sweet’ or ‘Category A’), whereas the more evocative ones related either (ii) to the sensory experience of coffee (e.g., ‘Sweet Chocolate Flavor’ or ‘Smokey-Sweet Charred Dark Roast’) or (iii) to imagery related to where the coffee was grown (e.g., ‘Rich Volcanic Soil’ or ‘Dark Rich Volcanic Soil’). The labels relating to where the coffee was grown were included as a second control to show that merely increasing imagery does not increase perceived variety; it is increasing the sensory imagery relating to the items that does so. As expected, only category labels that evoked sensory imagery increased consumers’ perception of variety, whereas imagining where the coffee was grown did not enhance the perception of variety. This finding extends recent research showing that the type of sensory information included in an ad alters perceptions of a product (Elder and Krishna, 2010) by illustrating that the inclusion of sensory information can also alter the perceived variety of a set of products. Thus, the inclusion of sensory information can be used flexibly to alter perceptions of both a single product and a set of choice alternatives.
Citations: 0
Body and gaze centered coding of touch locations during a dynamic task
Pub Date : 2012-01-01 DOI: 10.1163/187847612X648242
Lisa M. Pritchett, Michael J. Carnevale, L. Harris
We have previously reported that head position affects the perceived location of touch differently depending on the dynamics of the task the subject is involved in. When touch was delivered and responses were made with the head rotated, touch location shifted in the opposite direction to the head position, consistent with body-centered coding. When touch was delivered with the head rotated but the response was made with the head centered, touch location shifted in the same direction as the head, consistent with gaze-centered coding. Here we tested whether moving the head between touch and response would modulate the effects of head position on touch location. Each trial consisted of three periods: in the first, arrows and LEDs guided the subject to a randomly chosen head orientation (90° left, right, or center) and a vibration stimulus was delivered. Next, subjects were either guided to turn their head or to remain in the same orientation. In the final period they were again guided to turn or to remain in the same orientation before reporting the perceived location of the touch on a visual scale using a mouse and computer screen. Reported touch location was shifted in the direction opposite to the head orientation during touch presentation, regardless of the orientation during the response or whether a movement was made before the response. The size of the effect was much reduced compared to our previous results. These results are consistent with touch location being coded in both a gaze-centered and a body-centered reference frame under dynamic conditions.
Citations: 0
Temporal processing of self-motion: Translations are processed slower than rotations
Pub Date : 2012-01-01 DOI: 10.1163/187847612X648369
F. Soyka, M. Cowan, P. Giordano, H. Bülthoff
Reaction times (RTs) to purely inertial self-motion stimuli have only infrequently been studied, and comparisons of RTs for translations and rotations are, to our knowledge, nonexistent. We recently proposed a model (Soyka et al., 2011) which describes direction discrimination thresholds for rotational and translational motions based on the dynamics of the vestibular sensory organs (otoliths and semicircular canals). This model also predicts differences in RTs for different motion profiles (e.g., trapezoidal versus triangular acceleration profiles, or varying profile durations). In order to assess these predictions we measured RTs in 20 participants for 8 supra-threshold motion profiles (4 translations, 4 rotations). A two-alternative forced-choice task, discriminating leftward from rightward motions, was used, and 30 correct responses per condition were evaluated. The results agree with the predicted RT differences between motion profiles as derived from model parameters previously identified from threshold measurements. To describe absolute RT, a constant is added to the predictions, representing both the discrimination process and the time needed to press the response button. This constant is approximately 160 ms shorter for rotations, indicating that additional processing time is required for translational motion. As this additional latency cannot be explained by our model based on the dynamics of the sensory organs, we speculate that it originates at a later stage, e.g., during tilt-translation disambiguation. The varying processing latencies for different self-motion stimuli (either translations or rotations) that our model can account for must be considered when assessing the perceived timing of vestibular stimulation in comparison with other senses (Barnett-Cowan and Harris, 2009; Sanders et al., 2011).
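The general modelling idea, predicting detection latency from the dynamics of the sensory organs, can be illustrated by passing an acceleration profile through a linear filter standing in for the transfer function of the canal or otolith afferents and reading off the first threshold crossing, plus a fixed non-sensory constant. The Python sketch below is a toy version of that idea with made-up time constants and thresholds; it is not the authors' published model.

```python
# A minimal sketch of threshold-crossing RT prediction from filtered
# acceleration profiles. The first-order filter, time constant, threshold and
# added non-sensory constant are illustrative assumptions.
import numpy as np

DT = 0.001  # simulation step (s)

def first_order_response(acceleration, tau):
    """Euler simulation of a first-order low-pass filter x' = (u - x) / tau."""
    x, out = 0.0, np.zeros_like(acceleration)
    for i, u in enumerate(acceleration):
        x += DT * (u - x) / tau
        out[i] = x
    return out

def predicted_rt(acceleration, tau, threshold, constant=0.3):
    """Time of first threshold crossing plus a fixed non-sensory latency."""
    response = np.abs(first_order_response(acceleration, tau))
    above = np.nonzero(response >= threshold)[0]
    if len(above) == 0:
        return None                          # stimulus never detected
    return above[0] * DT + constant

if __name__ == "__main__":
    t = np.arange(0.0, 2.0, DT)
    trapezoid = np.clip(t / 0.3, 0, 1) * np.clip((2.0 - t) / 0.3, 0, 1)  # m/s^2
    triangle = np.minimum(t, 2.0 - t)
    for name, profile in [("trapezoidal", trapezoid), ("triangular", triangle)]:
        print(name, predicted_rt(profile, tau=0.2, threshold=0.25))
```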
Citations: 4