Learning to imitate facial expressions through sound

IF 5.7 · JCR Region 1 (Psychology) · Q1 PSYCHOLOGY, DEVELOPMENTAL · Developmental Review · Pub Date: 2024-06-18 · DOI: 10.1016/j.dr.2024.101137
Narain K. Viswanathan, Carina C.J.M. de Klerk, Samuel V. Wass, Louise Goupil
Developmental Review, Volume 73, Article 101137.
Full-text PDF: https://www.sciencedirect.com/science/article/pii/S0273229724000212/pdfft?md5=b600b32363bc164d99608e88e1cb2665&pid=1-s2.0-S0273229724000212-main.pdf
Citations: 0

Abstract


The question of how young infants learn to imitate others’ facial expressions has been central in developmental psychology for decades. Facial imitation has been argued to constitute a particularly challenging learning task for infants because facial expressions are perceptually opaque: infants cannot see changes in their own facial configuration when they execute a motor program, so how do they learn to match these gestures with those of their interacting partners? Here we argue that this apparent paradox mainly appears if one focuses only on the visual modality, as most existing work in this field has done so far. When considering other modalities, in particular the auditory modality, many facial expressions are not actually perceptually opaque. In fact, every orolabial expression that is accompanied by vocalisations has specific acoustic consequences, which means that it is relatively transparent in the auditory modality. Here, we describe how this relative perceptual transparency can allow infants to accrue experience relevant for orolabial, facial imitation every time they vocalise. We then detail two specific mechanisms that could support facial imitation learning through the auditory modality. First, we review evidence showing that experiencing correlated proprioceptive and auditory feedback when they vocalise – even when they are alone – enables infants to build audio-motor maps that could later support facial imitation of orolabial actions. Second, we show how these maps could also be used by infants to support imitation even for silent, orolabial facial expressions at a later stage. By considering non-visual perceptual domains, this paper expands our understanding of the ontogeny of facial imitation and offers new directions for future investigations.
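The abstract's central mechanism, building audio-motor maps from the correlated motor and auditory consequences of one's own vocalisations, then reusing those maps to match a partner's sounds to motor programs, can be illustrated with a deliberately simple computational sketch. Everything below (the number of motor programs, the fixed "vocal tract" matrix, the noise level, the Hebbian averaging rule) is a hypothetical toy model for illustration, not the authors' formalism:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical values): 4 orolabial motor programs, each
# producing a characteristic 3-dimensional acoustic signature.
n_motor, n_audio = 4, 3
vocal_tract = rng.normal(size=(n_motor, n_audio))  # true motor -> sound mapping

# 1) Babbling phase: the infant executes spontaneous motor programs
#    and hears their noisy acoustic consequences, accumulating a
#    simple audio-motor map (mean sound heard per motor program).
audio_motor_map = np.zeros((n_motor, n_audio))
counts = np.zeros(n_motor)
for _ in range(500):
    m = rng.integers(n_motor)                      # spontaneous vocalisation
    sound = vocal_tract[m] + 0.1 * rng.normal(size=n_audio)
    audio_motor_map[m] += sound                    # Hebbian-style accumulation
    counts[m] += 1
audio_motor_map /= counts[:, None]

# 2) Imitation phase: on hearing a partner's vocalisation, select the
#    motor program whose learned acoustic signature is closest.
def imitate(heard_sound):
    distances = np.linalg.norm(audio_motor_map - heard_sound, axis=1)
    return int(np.argmin(distances))
```

In this sketch the map is learned entirely from self-produced sounds (the infant alone, as in the abstract's first mechanism), yet it suffices to recover which motor program a partner executed from sound alone, which is the sense in which vocalised orolabial expressions are "perceptually transparent" in the auditory modality.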

Source journal: Developmental Review (PSYCHOLOGY, DEVELOPMENTAL)
CiteScore: 11.00
Self-citation rate: 3.00%
Annual articles: 27
Review time: 51 days
Journal description: Presenting research that bears on important conceptual issues in developmental psychology, Developmental Review: Perspectives in Behavior and Cognition provides child and developmental, child clinical, and educational psychologists with authoritative articles that reflect current thinking and cover significant scientific developments. The journal emphasizes human developmental processes and gives particular attention to issues relevant to child developmental psychology. The research concerns issues with important implications for the fields of pediatrics, psychiatry, and education, and increases the understanding of socialization processes.
Latest articles in this journal:
- Executive function: Debunking an overprized construct
- Learning to live in the spatial world: Experience-expectant and experience-dependent input
- Executive functions and social cognition from early childhood to pre-adolescence: A systematic review
- Judith Rich Harris and child development: 25 years after The Nurture Assumption
- Chronicle of deceit: Navigating the developmental cognitive landscape from childhood fabrications to prolific adulthood artistry