Comparison of Human Experts and AI in Predicting Autism from Facial Behavior.

CEUR Workshop Proceedings · Published: 2023-03-01 · Epub: 2023-03-16
Evangelos Sariyanidi, Casey J Zampella, Ellis DeJardin, John D Herrington, Robert T Schultz, Birkan Tunc
CEUR Workshop Proceedings, Vol. 3359 (ITAH), pp. 48-57. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10687770/pdf/

Abstract

Advances in computational behavior analysis via artificial intelligence (AI) promise to improve mental healthcare services by providing clinicians with tools to assist diagnosis or measurement of treatment outcomes. This potential has spurred an increasing number of studies in which automated pipelines predict diagnoses of mental health conditions. However, a fundamental question remains unanswered: How do the predictions of AI algorithms compare with those of humans? This is a critical question if AI technology is to be used as an assistive tool, because the utility of an AI algorithm would be negligible if it provides little information beyond what clinicians can readily infer. In this paper, we compare the performance of 19 human raters (8 autism experts and 11 non-experts) and that of an AI algorithm at predicting autism diagnosis from short (3-minute) videos of N = 42 participants in a naturalistic conversation. Results show that the AI algorithm achieves an average accuracy of 80.5%, which is comparable to that of clinicians with expertise in autism (83.1%) and clinical research staff without specialized expertise (78.3%). Critically, diagnoses that were inaccurately predicted by most humans (experts and non-experts alike) were typically correctly predicted by AI. Our results highlight the potential of AI as an assistive tool that can augment clinician diagnostic decision-making.
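The headline comparison in the abstract (average rater accuracy vs. AI accuracy, plus the "human-majority-wrong but AI-right" cases) can be sketched with toy data. This is a minimal illustration of the tallying logic only, not the paper's pipeline or data; all labels and predictions below are hypothetical.

```python
# Toy example: 5 videos, 3 human raters, 1 AI model (all values hypothetical).
true_dx = [1, 0, 1, 0, 1]            # ground-truth diagnoses
human_preds = [                      # one row of predictions per rater
    [1, 0, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 1, 1],
]
ai_preds = [1, 0, 1, 0, 1]           # AI predictions

def accuracy(preds, truth):
    """Fraction of videos predicted correctly."""
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

# Average accuracy across human raters vs. AI accuracy
human_acc = sum(accuracy(r, true_dx) for r in human_preds) / len(human_preds)
ai_acc = accuracy(ai_preds, true_dx)

# Videos where a majority of humans were wrong but the AI was right —
# the "complementary information" cases the abstract highlights.
complementary = [
    i for i, t in enumerate(true_dx)
    if sum(r[i] != t for r in human_preds) > len(human_preds) / 2
    and ai_preds[i] == t
]
print(f"human avg acc: {human_acc:.2f}, AI acc: {ai_acc:.2f}")
print("majority-human-wrong, AI-right videos:", complementary)
```

With this toy data the third video is misclassified by all three raters but caught by the AI, mirroring (in miniature) the paper's observation that AI errors and human errors need not overlap.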
