
Latest publications in Seeing and Perceiving

Early auditory sensory processing is facilitated by visual mechanisms
Pub Date : 2012-01-01 DOI: 10.1163/187847612X648143
Sonja Schall, S. Kiebel, B. Maess, K. Kriegstein
There is compelling evidence that low-level sensory areas are sensitive to more than one modality. For example, auditory cortices respond to visual-only stimuli (Calvert et al., 1997; Meyer et al., 2010; Pekkola et al., 2005) and conversely, visual sensory areas respond to sound sources even in auditory-only conditions (Poirier et al., 2005; von Kriegstein et al., 2008; von Kriegstein and Giraud, 2006). Currently, it is unknown what makes the brain activate modality-specific sensory areas solely in response to input of a different modality. One reason may be that such activations are instrumental for early sensory processing of the input modality — a hypothesis that is contrary to current textbook knowledge. Here we test this hypothesis by harnessing a temporally highly resolved method, i.e., magnetoencephalography (MEG), to identify the temporal response profile of visual regions in response to auditory-only voice recognition. Participants (n = 19) briefly learned a set of voices audio–visually, i.e., together with a talking face in an ecologically valid situation, as in daily life. Once subjects were able to recognize these now familiar voices, we measured their brain responses using MEG. The results revealed two key mechanisms that characterize the sensory processing of familiar speakers’ voices: (i) activation in the visual face-sensitive fusiform gyrus at very early auditory processing stages, i.e., only 100 ms after auditory onset, and (ii) a temporal facilitation of auditory processing (M200) that was directly associated with improved recognition performance. These findings suggest that visual areas are instrumental already during very early auditory-only processing stages and indicate that the brain uses visual mechanisms to optimize sensory processing and recognition of auditory stimuli.
Seeing and Perceiving, vol. 25, no. 1, pp. 184–185 (2012).
Citations: 0
The effect of video game training on the vision of adults with bilateral deprivation amblyopia.
Pub Date : 2012-01-01 DOI: 10.1163/18784763-00002391
Seong Taek Jeon, Daphne Maurer, Terri L Lewis

Amblyopia is a condition involving reduced acuity caused by abnormal visual input during a critical period beginning shortly after birth. Amblyopia is typically considered to be irreversible during adulthood. Here we provide the first demonstration that video game training can improve at least some aspects of the vision of adults with bilateral deprivation amblyopia caused by a history of bilateral congenital cataracts. Specifically, after 40 h of training over one month with an action video game, most patients showed improvement in one or both eyes on a wide variety of tasks including acuity, spatial contrast sensitivity, and sensitivity to global motion. As well, there was evidence of improvement in at least some patients for temporal contrast sensitivity, single letter acuity, crowding, and feature spacing in faces, but not for useful field of view. The results indicate that, long after the end of the critical period for damage, there is enough residual plasticity in the adult visual system to effect improvements, even in cases of deep amblyopia caused by early bilateral deprivation.

Seeing and Perceiving, vol. 25, no. 5, pp. 493–520 (2012).
Citations: 48
Single-object consistency facilitates multisensory pair learning: Evidence for unitization
Pub Date : 2012-01-01 DOI: 10.1163/187847612X646343
Elan Barenholtz, D. Lewkowicz, Lauren Kogelschatz
Learning about objects often involves associating multisensory properties such as the taste and smell of a food or the face and voice of a person. Here, we report a novel phenomenon in associative learning in which pairs of multisensory attributes that are consistent with deriving from a single object are learned better than pairs that are not. In Experiment 1, we found superior learning of arbitrary pairs of human faces and voices when they were gender-congruent — and thus were consistent with belonging to a single personal identity — compared with gender-incongruent pairs. In Experiment 2, we found a similar advantage when the learned pair consisted of species-congruent animal pictures and vocalizations vs. species-incongruent pairs. In Experiment 3, we found that temporal synchrony — which provides a highly reliable alternative cue that properties derive from a single object — improved performance specifically for the incongruent pairs. Together, these findings demonstrate a novel principle in associative learning in which multisensory pairs that are consistent with having a single object as their source are learned more easily than multisensory pairs that are not. These results suggest that unitizing multisensory properties into a single representation may be a specialized learning mechanism.
Seeing and Perceiving, vol. 87, no. 1, p. 11 (2012).
Citations: 0
Sounds prevent selective monitoring of high spatial frequency channels in vision
Pub Date : 2012-01-01 DOI: 10.1163/187847612X646622
Alexis Pérez-Bellido, Joan López-Moliner, S. Soto-Faraco
Prior knowledge about the spatial frequency (SF) of upcoming visual targets (Gabor patches) speeds up average reaction times and decreases standard deviation. This has often been regarded as evidence for a multichannel processing of SF in vision. Multisensory research, on the other hand, has often reported the existence of sensory interactions between auditory and visual signals. These interactions result in enhancements in visual processing, leading to lower sensory thresholds and/or more precise visual estimates. However, little is known about how multisensory interactions may affect the uncertainty regarding visual SF. We conducted a reaction time study in which we manipulated the uncertainty about the SF of visual targets (SF was blocked or interleaved across trials), and compared visual-only versus audio–visual presentations. Surprisingly, the analysis of the reaction times and their standard deviation revealed an impairment of the selective monitoring of the SF channel by the presence of a concurrent sound. Moreover, this impairment was especially pronounced when the relevant channels were high SFs at high visual contrasts. We propose that an accessory sound automatically favours visual processing of low SFs through the magnocellular channels, thereby detracting from the potential benefits of tuning into high-SF psychophysical channels.
Seeing and Perceiving, vol. 25, no. 1, p. 40 (2012).
Citations: 0
Is maintaining balance during standing associated with inefficient audio–visual integration in older adults?
Pub Date : 2012-01-01 DOI: 10.1163/187847612X646712
J. Stapleton, E. Doheny, A. Setti, C. Cunningham, L. Crosby, R. Kenny, F. Newell
It has previously been shown that older adults may be less efficient than younger adults at processing multisensory information, and that older adults with a history of falling may be less efficient than a healthy cohort when processing audio–visual stimuli (Setti et al., 2011). We investigated whether body stance has an effect on older adults’ ability to efficiently process multisensory information and also whether being presented with multisensory stimuli while standing may affect an individual’s balance. This experiment was performed by 44 participants, including both fall-prone older adults and a healthy control cohort. We tested their susceptibility to a sound-induced flash illusion (i.e., Shams et al., 2002) in both sitting and standing positions while measuring balance parameters using body-worn sensors. The results suggest that balance control in fall-prone adults was compromised relative to adults with no falls history, and this was particularly evident while they were presented with the sound-induced flash illusion but not in the non-illusory condition. Also, when the temporal window of the stimulus onset asynchrony was narrow (70 ms), fall-prone adults were more susceptible to the illusion while standing than while seated, whereas the performance of older adults with no history of falling was unaffected by a change in position. These results suggest a link between efficient multisensory integration and balance control and have implications for interventions when fall-prone adults encounter complex multisensory information in their environment.
Seeing and Perceiving, vol. 1, no. 1, p. 50 (2012).
Citations: 0
Psychedelic synaesthesia: Evidence for a serotonergic role in synaesthesia
Pub Date : 2012-01-01 DOI: 10.1163/187847612X646956
D. Luke, D. Terhune, Ross Friday
The neurobiology of synaesthesia is receiving growing attention in the search for insights into consciousness, such as the binding problem. One way of decoding the neurocognitive mechanisms underlying this phenomenon is to investigate the induction of synaesthesia via neurochemical agents, as commonly occurs with psychedelic substances. How synaesthesia is affected by drugs can also help inform us of the neural mechanisms underlying this condition. To address these questions we surveyed a sample of recreational drug users regarding the prevalence, type and frequency of synaesthesia under the influence of psychedelics and other psychoactive substances. The results indicate that synaesthesia is frequently experienced following the consumption of serotonergic agonists such as LSD and psilocybin and that these same drugs appear to augment synaesthesia in congenital synaesthetes. These results implicate the serotonergic system in the experience of synaesthesia.
Seeing and Perceiving, vol. 25, no. 1, p. 74 (2012).
Citations: 11
The hands have it: Hand specific vision of touch enhances touch perception and somatosensory evoked potential
Pub Date : 2012-01-01 DOI: 10.1163/187847612X646659
Brenda R. Malcolm, K. Reilly, J. Mattout, R. Salemme, O. Bertrand, M. Beauchamp, T. Ro, A. Farnè
Our ability to accurately discriminate information from one sensory modality is often influenced by information from the other senses. Previous research indicates that tactile perception on the hand may be enhanced if participants look at a hand (compared to a neutral object) and if visual information about the origin of touch conveys temporal and/or spatial congruency. The current experiment further assessed the effects of non-informative vision on tactile perception. Participants made speeded discrimination responses (digit 2 or digit 5 of their right hand) to supra-threshold electro-cutaneous stimulation while viewing a video showing a pointer, in a static position or moving (dynamic), towards the same or different digit of a hand or to the corresponding spatial location on a non-corporeal object (engine). Therefore, besides manipulating whether a visual contact was spatially congruent to the simultaneously felt touch, we also manipulated the nature of the recipient object (hand vs. engine). Behaviourally, the temporal cues provided by the dynamic visual information about an upcoming touch decreased reaction times. Additionally, a greater enhancement in tactile discrimination was present when participants viewed a spatially congruent contact compared to a spatially incongruent contact. Most importantly, this visually driven improvement was greater for the view-hand condition compared to the view-object condition. Spatially-congruent, hand-specific visual events also produced the greatest amplitude in the P50 somatosensory evoked potential (SEP). We conclude that tactile perception is enhanced when vision provides non-predictive spatio-temporal cues and that these effects are specifically enhanced when viewing a hand.
Seeing and Perceiving, vol. 6, no. 1, p. 43 (2012).
Citations: 1
Audiovisual crossmodal correspondences in Autism Spectrum Disorders (ASDs)
Pub Date : 2012-01-01 DOI: 10.1163/187847612X646668
Valeria Occelli, G. Esposito, P. Venuti, P. Walker, M. Zampini
The label ‘crossmodal correspondences’ has been used to define the nonarbitrary associations that appear to exist between different basic physical stimulus attributes in different sensory modalities. For instance, it has been consistently shown in the neurotypical population that higher pitched sounds are more frequently matched with visual patterns which are brighter, smaller, and sharper than those associated to lower pitched sounds. Some evidence suggests that patients with ASDs tend not to show this crossmodal preferential association pattern (e.g., curvilinear shapes and labial/lingual consonants vs. rectilinear shapes and plosive consonants). In the present study, we compared the performance of children with ASDs (6–15 years) and matched neurotypical controls in a non-verbal crossmodal correspondence task. The participants were asked to indicate which of two bouncing visual patterns was making a centrally located sound. In intermixed trials, the visual patterns varied in either size, surface brightness, or shape, whereas the sound varied in pitch. The results showed that, whereas the neurotypical controls reliably matched the higher pitched sound to a smaller and brighter visual pattern, the performance of participants with ASDs was at chance level. In the condition where the visual patterns differed in shape, no inter-group difference was observed. Children’s matching performance cannot be attributed to intensity matching or difficulties in understanding the instructions, which were controlled. These data suggest that the tendency to associate congruent visual and auditory features vary as a function of the presence of ASDs, possibly pointing to poorer capabilities to integrate auditory and visual inputs in this population.
Seeing and Perceiving, vol. 25, pp. 44–44 (2012). Citations: 0
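The ‘at chance level’ result for the ASD group is the kind of finding an exact binomial test against a 0.5 guessing rate supports. A minimal pure-Python sketch of such a test — the trial counts below are hypothetical illustrations, not the study’s data:

```python
from math import comb

def binomial_p_two_sided(k, n, p=0.5):
    """Exact two-sided binomial test: probability, under chance level p,
    of an outcome no more likely than k successes out of n trials."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    threshold = pmf[k] * (1 + 1e-9)  # tolerance for floating-point ties
    return min(1.0, sum(q for q in pmf if q <= threshold))

# Hypothetical example: a child picks the 'correct' pattern on 34 of 60 trials.
p_val = binomial_p_two_sided(34, 60)
print(f"p = {p_val:.3f}")  # well above 0.05: performance consistent with chance
```

With 34/60 correct the exact p-value is far above 0.05, so such a score would not be distinguishable from guessing, whereas a reliably above-chance matcher (say 50/60) would yield a vanishingly small p.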
Effects of looming and static sounds on somatosensory processing: A MEG study
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647270
E. Leonardelli, Valeria Occelli, G. Demarchi, M. Grassi, C. Braun, M. Zampini
The present study aims to assess the mechanisms involved in the processing of potentially threatening stimuli presented within the peri-head space of humans. Magnetic fields evoked by air puffs presented at the peri-oral area of fifteen participants were recorded using magnetoencephalography (MEG). Crucially, each air puff was preceded by a sound, which could be perceived either as looming, as stationary and close to the body (i.e., within the peri-head space), or as stationary and far from the body (i.e., in extrapersonal space). A comparison of the time courses of the global field power (GFP) indicated a significant difference between the conditions in the time window from 70 to 170 ms. When the air puff was preceded by a stationary sound located far from the head, stronger somatosensory activity was evoked compared to the conditions where the sounds were located close to the head. No difference was found between the looming and the stationary prime stimulus close to the head. Source localization was performed assuming a pair of symmetric dipoles in a spherical head model fitted to the MRI images of the individual participants. Results showed sources in primary and secondary somatosensory cortex. Source activity in secondary somatosensory cortex differed between the three conditions: the looming sounds evoked the largest effects, the far stationary sounds the smallest, and the close stationary sounds intermediate effects. Overall, these findings suggest the existence of a system in humans involved in detecting approaching objects and protecting the body from collisions.
Seeing and Perceiving, vol. 25, pp. 94–94 (2012). Citations: 0
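Global field power, which the abstract uses to compare conditions over time, is conventionally defined as the standard deviation of the field values across all sensors at each time point (Lehmann and Skrandies, 1980). A minimal sketch on made-up sensor data:

```python
from statistics import pstdev

def global_field_power(samples):
    """GFP time course: population standard deviation across sensors
    at each time point.  `samples` is a list of time points, each a
    list of per-sensor field values (e.g., in fT for MEG)."""
    return [pstdev(sensor_values) for sensor_values in samples]

# Made-up 3-sensor recording over 3 time points.
recording = [
    [0.5, 0.5, 0.5],   # uniform field -> GFP 0 (GFP ignores the spatial mean)
    [1.0, -1.0, 0.0],
    [2.0, -2.0, 0.0],  # same topography, doubled strength -> doubled GFP
]
print(global_field_power(recording))
```

Because GFP collapses the whole sensor array into one number per time point, condition differences can be tested on its time course, as in the 70–170 ms window above, without choosing individual sensors in advance.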
ERP investigations into the effects of gaze and spatial attention on the processing of tactile events
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647784
Elena Gherri, Bettina Forster
Previous research demonstrated that directing one’s gaze at a body part reduces detection speed (e.g., Tipper et al., 1998) and enhances the processing (Forster and Eimer, 2005) of tactile stimuli presented at the gazed location. Interestingly, gaze-dependent modulations of somatosensory evoked potentials (SEPs) are very similar to those observed in previous studies of tactile spatial attention. This might indicate that manipulating gaze direction activates the same mechanisms that are responsible for the covert orienting of spatial attention in touch. To investigate this possibility, gaze direction and sustained tactile attention were orthogonally manipulated in the present study. In different blocks of trials, participants focused their attention on the left or right hand while gazing at either the attended or the unattended hand, and had to respond to infrequent tactile targets presented to the attended hand. Analyses of the SEPs elicited by tactile non-target stimuli demonstrate that gaze and attention influence different stages of tactile processing. While gaze is able to modulate tactile processing as early as 50 ms after stimulus onset, attentional SEP modulations are only observed beyond 110 ms post-stimulus. This dissociation in the timing, and therefore in the associated locus, of the effects of gaze and attention on somatosensory processing reveals that the effect of gaze on tactile processing is independent of tactile attention.
Seeing and Perceiving, vol. 25, pp. 146–146 (2012). Citations: 1
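Latency-window effects like the 50 ms gaze modulation and the post-110 ms attention modulation above are typically quantified as mean amplitudes within a time window of the epoch. A sketch with a hypothetical 1000 Hz sampling rate and made-up window bounds:

```python
def mean_amplitude(waveform, srate_hz, t_start_ms, t_end_ms):
    """Mean amplitude of an epoch (stimulus onset at sample 0)
    within the half-open window [t_start_ms, t_end_ms)."""
    i0 = int(t_start_ms * srate_hz / 1000)
    i1 = int(t_end_ms * srate_hz / 1000)
    window = waveform[i0:i1]
    return sum(window) / len(window)

# Hypothetical SEP epoch at 1000 Hz: a 1 uV deflection from 40 to 70 ms.
sep = [0.0] * 40 + [1.0] * 30 + [0.0] * 130
early = mean_amplitude(sep, 1000, 50, 100)   # early, gaze-sensitive range
late = mean_amplitude(sep, 1000, 110, 200)   # later, attention-sensitive range
print(early, late)  # 0.4 0.0
```

Comparing such window means across conditions (gazed vs. non-gazed, attended vs. unattended) is what lets the onset latencies of the two effects be dissociated.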