
Latest publications in Seeing and Perceiving

Infant perception of audiovisual synchrony in fluent speech
Pub Date : 2012-01-01 DOI: 10.1163/187847612X646587
F. Pons, D. Lewkowicz
It is known that perception of audio–visual (A–V) temporal relations is affected by the type of stimulus used. This includes differences in A–V temporal processing of speech and non-speech events and of native vs. non-native speech. Similar differences have been found early in life, but no studies have investigated infant response to A–V temporal relations in fluent speech. Extant studies (Lewkowicz, 2010) investigating infant response to isolated syllables have found that infants can detect an A–V asynchrony (auditory leading visual) of 666 ms but not shorter ones. Here, we investigated infant response to A–V asynchrony in fluent speech and whether linguistic experience plays a role in responsiveness. To do so, we tested 24 monolingual Spanish-learning and 24 monolingual Catalan-learning 8-month-old infants. First, we habituated the infants to an audiovisually synchronous video clip of a person speaking in Spanish and then tested them in separate test trials for detection of different degrees of A–V asynchrony (audio preceding video by 366, 500 or 666 ms). We found that infants detected A–V asynchronies of 666 and 500 ms, and that they did so regardless of linguistic background. Thus, compared to previous results from infant studies with isolated audiovisual syllables, we found that infants are more sensitive to the A–V temporal relations inherent in fluent speech. Furthermore, given that responsiveness to non-native speech narrows during the first year of life, the absence of a language effect suggests that perceptual narrowing of A–V synchrony detection is not complete by 8 months of age.
Citations: 1
Crossmodal correspondences between chemosensory stimuli and musical notes
Pub Date : 2012-01-01 DOI: 10.1163/187847612X646938
A. Crisinel, C. Spence
We report a series of experiments investigating crossmodal correspondences between various food-related stimuli (water-based solutions, milk-based flavoured solutions, crisps, chocolate and odours) and sounds varying in pitch and played by four different types of musical instruments. Participants tasted or smelled stimuli before matching them to a musical note. Our results demonstrate that participants preferentially match certain stimuli to specific pitches and instrument types. Through participants’ ratings of the stimuli along a number of dimensions (e.g., pleasantness, complexity, familiarity or sweetness), we explore the psychological dimensions involved in these crossmodal correspondences, using principal components analysis (PCA). While pleasantness seems to play an important role in the choice of instrument associated with chemosensory stimuli, the pitch seems to also depend on the quality of the taste (bitter, salty, sour or sweet). The level at which such crossmodal correspondences might occur, as well as the potential applications of such results, will be discussed.
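The PCA step described above — reducing per-stimulus ratings on several dimensions to a few underlying components — can be sketched as follows. The ratings matrix and dimension labels here are hypothetical illustrations, not the study's data, and the SVD-based implementation is just one standard way to compute PCA.

```python
import numpy as np

# Hypothetical ratings: 6 chemosensory stimuli rated on 4 dimensions
# (pleasantness, complexity, familiarity, sweetness), 1-9 scale.
ratings = np.array([
    [7.1, 3.2, 6.5, 7.8],
    [2.4, 5.1, 4.0, 1.2],
    [5.0, 4.4, 7.2, 3.1],
    [6.2, 6.0, 5.5, 6.4],
    [3.3, 2.8, 3.9, 2.0],
    [8.0, 4.9, 6.8, 8.2],
])

def pca(X):
    """PCA via SVD of the column-centred data matrix."""
    Xc = X - X.mean(axis=0)                 # centre each rating dimension
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt.T                      # stimuli projected onto components
    explained = s**2 / np.sum(s**2)         # proportion of variance per PC
    return scores, Vt, explained

scores, loadings, explained = pca(ratings)
print(np.round(explained, 3))               # variance explained, descending
```

Inspecting the loadings of the first component would then show which rating dimensions (e.g., pleasantness) carry most of the shared variance across stimuli.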
Citations: 0
Spatial and temporal dynamics of visual processing during movement preparation: ERP evidence from adults with and without Developmental Coordination Disorder
Pub Date : 2012-01-01 DOI: 10.1163/187847612X646965
J. Velzen
Experimental evidence has shown that the actions we intend to perform influence the way our visual system processes information in the environment, consistent with the considerable overlap observed between brain circuits involved in action and attention. Conceptual thinking about action–perception links in cognitive science is heavily influenced by earlier work establishing that motor preparation causes a shift of attention to the goal of a movement. This sensory enhancement is characterised on a behavioural level by improved detection and discrimination performance at that location, and neurally by larger responses in visual cortex to stimuli presented there. In a series of experiments we examined electrophysiological visual cortex responses (ERPs) to task-irrelevant visual probe stimuli presented at various locations in movement space during preparation of manual reaching movements. The data from these experiments show simultaneously enhanced visual processing of stimuli at the location of the effector about to perform the movement and at the goal of the movement. Further, our data demonstrate that, compared to controls, adults with Developmental Coordination Disorder show a markedly different pattern of enhanced visual processing during preparation of more complex reaching movements, i.e., those across the body midline. This suggests a specific difficulty in this group in recruiting appropriate preparatory visual mechanisms for manual movements, which may be related to the difficulties this group experiences in daily life.
Citations: 0
The effects of rehearsal on auditory cortex: An fMRI study of the putative neural mechanisms of dance therapy
Pub Date : 2012-01-01 DOI: 10.1163/187847612X646677
J. DeSouza, Rachel J. Bar
We were interested in examining the time course of learning a new motor habit and its associated neural functional changes in the brain. To accomplish this we recruited five professional dancers, who were scanned using a within-subjects design. Each dancer participated in four fMRI (functional magnetic resonance imaging) scanning sessions over the training and learning of a dance to a 1-min piece of music, employing a typical blocked design (5 epochs alternating with 30-s fixation periods). We also tested five control subjects who had dance experience but did not learn the dance to this music. Subjects were asked to visualize dancing while listening to the piece of music. At the first scanning session, only 4 rehearsals of the piece (initial acquisition phase) had taken place. The control subjects were also tested at this time point, but they had no rehearsals and no visual exposure to the music before scanning. The second scanning session occurred one week later, after a total of 9 rehearsals. The third scanning session was completed 7 weeks after initial acquisition of the dance (the dance was performed a total of 16 times after initial training). Thus in total there were 22 scanning sessions using 10 subjects. Additionally, a control motor scan was performed in each scanning session to activate motor regions that should not change activation patterns across sessions. Results revealed a significant increase in BOLD signal across sessions in a network of brain regions extending from bilateral auditory cortex to supplementary motor cortex. These results suggest that as we learn a motor sequence from music, greater neuronal activity occurs; we discuss the potential neural network involved in dance and its implications for the neural regions potentially recruited during dance therapy.
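The blocked design described above (epochs of a 1-min task block alternating with 30-s fixation) can be sketched as a boxcar regressor of the kind used in fMRI analysis. The TR value and the block-before-fixation ordering are assumptions for illustration, not details reported by the authors.

```python
import numpy as np

TR = 2.0                     # assumed repetition time (s), illustrative only
block_s, fix_s, n_epochs = 60, 30, 5   # 1-min task block, 30-s fixation, 5 epochs

# Build a boxcar: 1 while the task block is "on", 0 during fixation.
timeline = []
for _ in range(n_epochs):
    timeline += [1] * int(block_s / TR)   # task volumes
    timeline += [0] * int(fix_s / TR)     # fixation volumes
boxcar = np.array(timeline)

print(boxcar.size)            # total volumes acquired over the run
```

In a full analysis this boxcar would be convolved with a haemodynamic response function before being entered into the GLM that yields the BOLD contrasts.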
Citations: 6
Watching touch increases people’s alertness to tactile stimuli presented on the body surface
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647432
V. Bellan, C. Reverberi, A. Gallace
Several studies have shown that watching one’s own body part improves tactile acuity and discrimination abilities for stimuli presented at that location. In a series of experiments we asked participants to localize tactile stimuli presented on the left or right arm. In Experiment 1 the participants were not allowed to watch their own body, but they could see another person’s left arm via an LCD display. This arm could be touched or not during the presentation of the stimuli. We found that when the participants saw the arm on the screen being touched, their responses to the tactile stimuli presented on the left and on the right arm were faster and more accurate than when the arm on the screen was approached but not touched. Critically, we did not find any illusion of ownership related to the hand seen on the screen. In Experiments 2 and 3 we varied the position of the screen with respect to the participant’s body midline and the image displayed on it (an arm or an object of equal size). The participants gave faster responses when an object rather than a hand was displayed on the screen. Moreover, the responses were slower when the hand on the screen was placed in front of the participants, as compared to any other position. Taken together, the results of our experiments suggest that watching touch activates multisensory mechanisms responsible for alerting people to the possible presence of tactile stimuli on the body surface.
Citations: 0
Within and cross-sensory interactions in the perceived attractiveness of unfamiliar faces
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647126
Brendan Cullen, F. Newell
Major findings on attractiveness, such as the role of averageness and symmetry, have emerged primarily from neutral static visual stimuli. However, it has increasingly been shown that ratings of attractiveness can be modulated within unisensory and multisensory modes by factors including emotional expression or additional information about the person. For example, previous research has indicated that humorous individuals are rated as more desirable than their non-humorous equivalents (Bressler and Balshine, 2006). In two experiments we measured within- and cross-sensory modulation of the attractiveness of unfamiliar faces. In Experiment 1 we examined whether manipulating the number and type of expressions shown across a series of images of a person influences the attractiveness rating for that person. Results indicate that for happy expressions, ratings of attractiveness gradually increase as the proportion of happy facial expressions increases relative to the number of neutral expressions. In contrast, an increase in the proportion of angry expressions was not associated with an increase in attractiveness ratings. In Experiment 2 we investigated whether perceived attractiveness can be influenced by multisensory information provided during exposure to the face image. Ratings were compared across face images presented with or without voice information. In addition, we provided either an emotional auditory cue (e.g., laughter) or a neutral one (e.g., coughing) to assess whether social information affects perceived attractiveness. Results show that multisensory information about a person can increase attractiveness ratings, but that the emotional content of the cross-modal information can affect preference for some faces over others.
Citations: 0
Auditory signal dominates visual in the perception of emotional social interactions
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647450
L. Piwek, K. Petrini, F. Pollick
Multimodal perception of emotions has typically been examined using displays of a solitary character (e.g., the face–voice and/or body–sound of one actor). We extend this investigation to more complex, dyadic point-light displays combined with speech. A motion and voice capture system was used to record twenty actors interacting in couples with happy, angry and neutral emotional expressions. The obtained stimuli were validated in a pilot study and used in the present study to investigate multimodal perception of emotional social interactions. Participants were required to categorize happy and angry expressions displayed visually, auditorily, or using emotionally congruent and incongruent bimodal displays. In a series of cross-validation experiments we found that sound dominated the visual signal in the perception of emotional social interaction. Although participants’ judgments were faster in the bimodal condition, the accuracy of judgments was similar for the bimodal and auditory-only conditions. When participants watched emotionally mismatched bimodal displays, they predominantly oriented their judgments towards the auditory rather than the visual signal. This auditory dominance persisted even when the reliability of the auditory signal was decreased with noise, although visual information had some effect on judgments of emotions when it was combined with a noisy auditory signal. Our results suggest that when judging emotions from observed social interaction, we rely primarily on vocal cues from the conversation rather than on visual cues from body movement.
Citations: 2
Short and sweet, or long and complex? Perceiving temporal synchrony in audiovisual events
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647298
Ragnhild Eg, D. Behne
Perceived synchrony varies depending on the audiovisual event. Typically, asynchrony is tolerated at greater lead- and lag-times for speech and music than for action events. The tolerance for asynchrony in speech has been attributed to the unity assumption, which proposes a bonding of auditory and visual speech cues through associations in several dimensions. However, the variations in synchrony perception for different audiovisual events may simply be related to their complexity; where speech and music fluctuate naturally, actions involve isolated events and anticipated moments of impact. The current study measured perception of synchrony for long (13 s) and short (1 s) variants of three types of stimuli: (1) action, represented by a game of chess, (2) music, played by a drummer and (3) speech, presented by an anchorwoman in a newscast. The long variants allowed events to play out with their natural dynamics, whereas short variants offered controlled and predictable single actions or events, selected from the longer segments. Results show that among the long stimuli, lead asynchrony was detected sooner for speech than for chess. This contrasts both with previous research and our own predictions, although it may be related to characteristics of the selected chess scene. Interestingly, tolerance to asynchrony was generally greater for short, than for long, stimuli, especially for speech. These findings suggest that the dynamics of complex events cannot account for previously observed differences in synchrony perception between speech and action events.
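The lead-asynchrony comparison above comes down to finding where the rate of "asynchronous" judgments first crosses a detection criterion. A minimal sketch of that step, using hypothetical response proportions and simple linear interpolation rather than any fitted psychometric model:

```python
def lead_threshold(soas_ms, p_async, criterion=0.5):
    """Return the smallest audio-lead SOA (ms) at which the proportion of
    'asynchronous' judgments first crosses the criterion, by linear
    interpolation between tested SOAs. Toy sketch, not a model fit."""
    pts = sorted(zip(soas_ms, p_async))
    for (s0, p0), (s1, p1) in zip(pts, pts[1:]):
        if p0 < criterion <= p1:
            return s0 + (criterion - p0) * (s1 - s0) / (p1 - p0)
    return None  # criterion never reached within the tested range

# Hypothetical detection rates at audio-lead SOAs of 0-300 ms
threshold = lead_threshold([0, 100, 200, 300], [0.10, 0.30, 0.60, 0.90])
```

With these made-up rates the criterion is crossed between the 100 ms and 200 ms test points, at roughly 167 ms; a study would instead fit a psychometric function to many trials per SOA.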
Citations: 2
Changes in temporal binding related to decreased vestibular input
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647397
N. N. Chang, Alex K. Malone, T. Hullar
Imbalance among patients with vestibular hypofunction has been related to inadequate compensatory eye movements in response to head movements. However, symptoms of imbalance might also occur due to a temporal mismatch between vestibular and other balance-related sensory cues. This temporal mismatch could be reflected in a widened temporal binding window (TBW), that is, the length of time over which simultaneous sensory stimuli may be offset and still perceived as simultaneous. We hypothesized that decreased vestibular input would lead to a widening of the temporal binding window. We performed whole-body rotations about the earth-vertical axis following a sinusoidal trajectory at 0.5 Hz with a peak velocity of 60°/s in four normal subjects. Dichotic auditory clicks were presented through headphones at various phases relative to the rotations. Subjects were asked to indicate whether the cues were synchronous or asynchronous, and the TBW was calculated. We then simulated decreased vestibular input by rotating at diminished peak velocities of 48, 24 and 12°/s in four normal subjects. The TBW was calculated as the interval spanning ±1 SD around the mean of the psychometric curve. We found that the TBW increases as the amplitude of rotation decreases: the average TBW of 251 ms at 60°/s increased to 309 ms at 12°/s.
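The abstract defines the TBW as the span between ±1 SD around the mean of the psychometric curve. A toy, moment-based sketch of that calculation on hypothetical proportion-"synchronous" data (the study's actual fitting procedure is not specified, so this is only an illustration of the ±1 SD definition):

```python
def tbw_from_curve(soas_ms, p_sync):
    """Estimate the temporal binding window as the width of mean +/- 1 SD,
    treating the normalized proportion-'synchronous' curve as a
    distribution over SOA (moment-based sketch, not a fitted model)."""
    total = sum(p_sync)
    mean = sum(s * p for s, p in zip(soas_ms, p_sync)) / total
    var = sum(p * (s - mean) ** 2 for s, p in zip(soas_ms, p_sync)) / total
    return 2 * var ** 0.5  # span from mean - 1 SD to mean + 1 SD

# Hypothetical proportions of 'synchronous' responses per click offset (ms)
soas = [-300, -200, -100, 0, 100, 200, 300]
p = [0.0, 0.25, 0.75, 1.0, 0.75, 0.25, 0.0]
tbw = tbw_from_curve(soas, p)
```

For this symmetric made-up curve the estimate comes out near 216 ms, in the same range as the windows reported above; a narrower curve yields a smaller TBW, a flatter one a wider TBW.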
Citations: 2
Sources of variance in the audiovisual perception of speech in noise
Pub Date : 2012-01-01 DOI: 10.1163/187847612X647568
C. Nahanni, J. Deonarine, M. Paré, K. Munhall
The sight of a talker’s face dramatically influences the perception of auditory speech. This effect is most commonly observed when subjects are presented audiovisual (AV) stimuli in the presence of acoustic noise. However, the magnitude of the perceptual gain that vision adds varies considerably across published work. Here we report data from an ongoing study of individual differences in AV speech perception when English words are presented against an acoustically noisy background. A large set of monosyllabic nouns was presented at 7 signal-to-noise ratios (pink noise) in both AV and auditory-only (AO) presentation modes. The stimuli were divided into 14 blocks of 25 words, and each block was equated for spoken-word frequency using the SUBTLEXus database (Brysbaert and New, 2009). The presentation of the stimulus blocks was counterbalanced across subjects for noise level and presentation mode. In agreement with Sumby and Pollack (1954), the accuracy of both AO and AV perception increased monotonically with signal strength, with the greatest visual gain occurring when the auditory signal was weakest. These average results mask considerable variability due to subject (individual differences in auditory and visual perception), stimulus (lexical type, token articulation) and presentation (signal and noise attributes) factors.
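The visual gain invoked above, following Sumby and Pollack, is simply the difference between AV and AO accuracy at each signal-to-noise ratio. A sketch with entirely hypothetical accuracies, illustrating why the gain peaks at the weakest auditory signal:

```python
def visual_gain(av_acc, ao_acc):
    """Per-SNR visual gain: audiovisual minus auditory-only accuracy.
    Both arguments map SNR (dB) to proportion of words identified."""
    return {snr: av_acc[snr] - ao_acc[snr] for snr in ao_acc}

# Hypothetical word-identification accuracies at three SNRs (dB)
ao = {-18: 0.05, -6: 0.45, 6: 0.90}
av = {-18: 0.40, -6: 0.70, 6: 0.95}
gain = visual_gain(av, ao)
```

With these made-up numbers the gain is largest at −18 dB, where AO accuracy has the most room to improve — the ceiling effect that compresses visual gain at high SNRs is one of the presentation factors the abstract flags as complicating cross-study comparisons.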
Citations: 0