Identifying Children With Autism Spectrum Disorder via Transformer-Based Representation Learning From Dynamic Facial Cues

IF 9.8 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
IEEE Transactions on Affective Computing, vol. 16, no. 1, pp. 83-97
Pub Date: 2024-06-11 · DOI: 10.1109/TAFFC.2024.3412032
https://ieeexplore.ieee.org/document/10553264/
Chen Xia;Hexu Chen;Junwei Han;Dingwen Zhang;Kuan Li
{"title":"Identifying Children With Autism Spectrum Disorder via Transformer-Based Representation Learning From Dynamic Facial Cues","authors":"Chen Xia;Hexu Chen;Junwei Han;Dingwen Zhang;Kuan Li","doi":"10.1109/TAFFC.2024.3412032","DOIUrl":null,"url":null,"abstract":"Recognizing autism spectrum disorder (ASD) has faced great challenges due to insufficient professional clinicians and complex procedures. Automated data-driven ASD recognition models can reduce the subjectivity and physician dependency of traditional evaluation methods. Facial data, which can encode important perceptual and social behaviors, have emerged in ASD research to explore novel biomarkers for screening, diagnosing, and treating ASD. However, existing research mainly focuses on extracting low-level hand-crafted facial features for analysis and classification. Determining how to learn discriminative deep representations from dynamic facial data for computational model construction remains an unresolved challenge. In this study, we propose an ASD recognition model based on facial videos to fill the lack of temporal correlation learning of facial features. First, we utilize a vision transformer to extract frame-based global facial features. Then, we use a Longformer to establish the correlation of facial features over time. In the experiment, we recruited 146 subjects between 2 and 8 years of age to record their facial videos under a computer-based eye-tracking experiment and 76 subjects to conduct a smartphone-based experiment. Quantitative comparisons have shown the effectiveness and reliability of the proposed model. Furthermore, we have confirmed the correlation between facial and eye-tracking modalities in visual attention.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"16 1","pages":"83-97"},"PeriodicalIF":9.8000,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Affective Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10553264/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Recognizing autism spectrum disorder (ASD) is challenging because professional clinicians are scarce and diagnostic procedures are complex. Automated, data-driven ASD recognition models can reduce the subjectivity and physician dependency of traditional evaluation methods. Facial data, which encode important perceptual and social behaviors, have emerged in ASD research as a source of novel biomarkers for screening, diagnosing, and treating ASD. However, existing research focuses mainly on extracting low-level hand-crafted facial features for analysis and classification. How to learn discriminative deep representations from dynamic facial data for building computational models remains an open challenge. In this study, we propose an ASD recognition model based on facial videos that addresses the lack of temporal correlation learning among facial features. First, we utilize a vision transformer to extract frame-based global facial features. Then, we use a Longformer to model the correlation of these facial features over time. In the experiments, we recruited 146 subjects between 2 and 8 years of age, whose facial videos were recorded during a computer-based eye-tracking experiment, and 76 subjects who took part in a smartphone-based experiment. Quantitative comparisons demonstrate the effectiveness and reliability of the proposed model. Furthermore, we confirm the correlation between the facial and eye-tracking modalities in visual attention.
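The abstract describes a two-stage pipeline: a vision transformer (ViT) encodes each video frame into a global facial feature, and a Longformer relates those per-frame features across time. The minimal PyTorch sketch below, using the Hugging Face transformers library, illustrates that idea only; the checkpoint name, Longformer configuration, mean-pooling, and classification head are illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch of the ViT + Longformer pipeline from the abstract.
# All hyperparameters and the classifier head are assumptions for
# illustration, not the paper's actual settings.
import torch
import torch.nn as nn
from transformers import LongformerConfig, LongformerModel, ViTModel


class FacialVideoASDClassifier(nn.Module):
    def __init__(self, num_classes: int = 2, max_frames: int = 512):
        super().__init__()
        # Frame encoder: the ViT [CLS] token serves as the per-frame
        # global facial feature (assumed checkpoint).
        self.vit = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
        hidden = self.vit.config.hidden_size  # 768 for ViT-Base
        # Temporal encoder: a small Longformer whose sliding-window
        # attention scales linearly with the number of frames.
        self.longformer = LongformerModel(
            LongformerConfig(
                hidden_size=hidden,
                num_hidden_layers=4,
                num_attention_heads=8,
                intermediate_size=4 * hidden,
                attention_window=[32] * 4,
                max_position_embeddings=max_frames + 2,
                vocab_size=2,  # unused: frame embeddings are fed in directly
            )
        )
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, 224, 224)
        b, t = frames.shape[:2]
        flat = frames.flatten(0, 1)  # (batch * time, 3, 224, 224)
        cls = self.vit(pixel_values=flat).last_hidden_state[:, 0]  # [CLS]
        seq = cls.view(b, t, -1)  # one feature vector per frame
        out = self.longformer(inputs_embeds=seq).last_hidden_state
        # Mean-pool frame representations into a video-level prediction.
        return self.head(out.mean(dim=1))


model = FacialVideoASDClassifier()
logits = model(torch.randn(1, 16, 3, 224, 224))  # (1, 2) class logits
```

Compared with full self-attention over every pair of frames, the Longformer's windowed attention keeps cost roughly linear in video length, which is a plausible reason to pair a frame-level ViT with a Longformer rather than a standard transformer when modeling long recordings.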
Source Journal

IEEE Transactions on Affective Computing
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, CYBERNETICS
CiteScore: 15.00
Self-citation rate: 6.20%
Articles published: 174
Journal Introduction

The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.
Latest Articles in This Journal

- Hierarchical Dynamics Aggregation Network for Speech-based Depression Detection
- Bootstrap Wayfinding Questions to Elicit Emotion Shift Reasoning with Large Language Models
- PersonalityLLM: Fine-tuning Large Language Models for Personality Assessment from Asynchronous Video Interviews
- Gait Emotion Recognition via Uncertainty-oriented Class Discriminative Learning
- MGMIN-FSA: A Multi-Granularity Multimodal Interaction Network for Sentiment Analysis of Financial Review Videos