Exploring Complexity of Facial Dynamics in Autism Spectrum Disorder

Impact Factor: 9.6 | CAS Tier 2, Computer Science | JCR Q1, Computer Science, Artificial Intelligence
IEEE Transactions on Affective Computing | Published: 2021-09-20 | DOI: 10.1109/TAFFC.2021.3113876
Pradeep Raj Krishnappa Babu;J. Matias Di Martino;Zhuoqing Chang;Sam Perochon;Kimberly L. H. Carpenter;Scott Compton;Steven Espinosa;Geraldine Dawson;Guillermo Sapiro
{"title":"Exploring Complexity of Facial Dynamics in Autism Spectrum Disorder","authors":"Pradeep Raj Krishnappa Babu;J. Matias Di Martino;Zhuoqing Chang;Sam Perochon;Kimberly L. H. Carpenter;Scott Compton;Steven Espinosa;Geraldine Dawson;Guillermo Sapiro","doi":"10.1109/TAFFC.2021.3113876","DOIUrl":null,"url":null,"abstract":"Atypical facial expression is one of the early symptoms of autism spectrum disorder (ASD) characterized by reduced regularity and lack of coordination of facial movements. Automatic quantification of these behaviors can offer novel biomarkers for screening, diagnosis, and treatment monitoring of ASD. In this work, 40 toddlers with ASD and 396 typically developing toddlers were shown developmentally-appropriate and engaging movies presented on a smart tablet during a well-child pediatric visit. The movies consisted of social and non-social dynamic scenes designed to evoke certain behavioral and affective responses. The front-facing camera of the tablet was used to capture the toddlers’ face. Facial landmarks’ dynamics were then automatically computed using computer vision algorithms. Subsequently, the complexity of the landmarks’ dynamics was estimated for the eyebrows and mouth regions using multiscale entropy. Compared to typically developing toddlers, toddlers with ASD showed higher complexity (i.e., less predictability) in these landmarks’ dynamics. This complexity in facial dynamics contained novel information not captured by traditional facial affect analyses. These results suggest that computer vision analysis of facial landmark movements is a promising approach for detecting and quantifying early behavioral symptoms associated with ASD.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"14 2","pages":"919-930"},"PeriodicalIF":9.6000,"publicationDate":"2021-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9541259","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Affective Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/9541259/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 4

Abstract

Atypical facial expression is one of the early symptoms of autism spectrum disorder (ASD), characterized by reduced regularity and lack of coordination of facial movements. Automatic quantification of these behaviors can offer novel biomarkers for screening, diagnosis, and treatment monitoring of ASD. In this work, 40 toddlers with ASD and 396 typically developing toddlers were shown developmentally appropriate and engaging movies presented on a smart tablet during a well-child pediatric visit. The movies consisted of social and non-social dynamic scenes designed to evoke certain behavioral and affective responses. The front-facing camera of the tablet was used to capture the toddlers’ faces. The dynamics of facial landmarks were then automatically computed using computer vision algorithms. Subsequently, the complexity of the landmark dynamics was estimated for the eyebrow and mouth regions using multiscale entropy. Compared to typically developing toddlers, toddlers with ASD showed higher complexity (i.e., less predictability) in these landmark dynamics. This complexity in facial dynamics contained novel information not captured by traditional facial affect analyses. These results suggest that computer vision analysis of facial landmark movements is a promising approach for detecting and quantifying early behavioral symptoms associated with ASD.
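The core quantity in the abstract is multiscale entropy (MSE) applied to facial-landmark time series. Below is a minimal, illustrative Python sketch of that idea: coarse-grain a 1-D landmark signal at several time scales and compute sample entropy at each scale. This is not the authors’ implementation; the signal name `landmark_signal`, the chosen scales, and the sample-entropy parameters (m = 2, r = 0.2·std) are assumptions following the standard MSE recipe of Costa et al.

```python
# Minimal sketch (not the paper's code) of multiscale entropy for a 1-D
# facial-landmark signal, e.g. a per-frame eyebrow- or mouth-landmark distance.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of signal x with template length m and tolerance r*std(x)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    tol = r * np.std(x)

    def count_matches(dim):
        # All overlapping templates of length `dim`.
        templates = np.array([x[i:i + dim] for i in range(n - dim + 1)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance to all later templates (excludes self-matches).
            dists = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dists <= tol)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    if a == 0 or b == 0:
        return np.inf  # undefined: signal too short or no template matches
    return -np.log(a / b)

def multiscale_entropy(x, scales=range(1, 6), m=2, r=0.2):
    """Coarse-grain x at each scale (non-overlapping window means), then apply SampEn."""
    x = np.asarray(x, dtype=float)
    mse = []
    for tau in scales:
        n_windows = len(x) // tau
        coarse = x[:n_windows * tau].reshape(n_windows, tau).mean(axis=1)
        mse.append(sample_entropy(coarse, m=m, r=r))
    return np.array(mse)

# Toy usage: a synthetic trace standing in for a tracked mouth-opening signal.
rng = np.random.default_rng(0)
landmark_signal = np.sin(np.linspace(0, 20 * np.pi, 900)) + 0.3 * rng.standard_normal(900)
print(multiscale_entropy(landmark_signal))  # one entropy value per scale
```

In this framing, consistently higher values across scales would correspond to the “higher complexity (i.e., less predictability)” that the abstract reports for the landmark dynamics of toddlers with ASD.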
Source journal: IEEE Transactions on Affective Computing (Computer Science, Artificial Intelligence; Computer Science, Cybernetics)
CiteScore: 15.00 | Self-citation rate: 6.20% | Articles per year: 174
Journal description: The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.
Latest articles in this journal:
- The ForDigitStress Dataset: A Multi-Modal Dataset for Automatic Stress Recognition
- Individual-Aware Attention Modulation for Unseen Speaker Emotion Recognition
- Sparse Emotion Dictionary and CWT Spectrogram Fusion with Multi-head Self-Attention for Depression Recognition in Parkinson's Disease Patients
- A Low-Rank Matching Attention Based Cross-Modal Feature Fusion Method for Conversational Emotion Recognition
- EEG-Based Cross-Subject Emotion Recognition Using Sparse Bayesian Learning with Enhanced Covariance Alignment